Sustainable finance
Sustainable finance is the set of financial regulations, standards, norms and products that pursue an environmental objective. It allows the financial system to connect with the economy and its population by financing economic agents while maintaining a growth objective. The long-standing concept gained prominence with the adoption of the Paris Climate Agreement, which stipulates that parties must make "finance flows consistent with a pathway towards low greenhouse gas emissions and climate-resilient development." Sustainable finance already had a key role to play in the European Green Deal and in other EU international agreements, and since the COVID-19 pandemic its role has become even more important.

In 2015, the United Nations adopted the 2030 Agenda to steer the transition towards a sustainable and inclusive economy. This commitment involves 193 member states and comprises 17 goals and 169 targets. The Sustainable Development Goals (SDGs) aim at tackling current global challenges, including protecting the planet. Sustainable finance has become a key cornerstone for the achievement of these goals.

Terminology

Terminology is essential for understanding the different concepts around sustainable finance and how they differ. The United Nations Environment Programme (UNEP) defines three concepts that are distinct but often used as synonyms: climate, green and sustainable finance. First, climate finance is a subset of environmental finance; it mainly refers to funds addressing climate change adaptation and mitigation. Green finance has a broader scope, because it also covers other environmental issues such as biodiversity protection. Lastly, sustainable finance includes environmental, social and corporate governance (ESG) factors in its scope. Because it extends to all three components of ESG, it is the broadest term, covering all financing activities that contribute to sustainable development.

International Initiative

By signing the Paris Agreement, more than 190 countries have committed to fighting climate change and reducing environmental degradation. Reaching the target of a maximum temperature increase of 2 °C requires billions in green investment each year in key sectors of the global economy. Public finance will continue to play a key role, but a significant share of the funding will have to come from the private sector. Because financial markets are global, they offer a great opportunity, but this potential is largely untapped: to mobilize international investors, it is necessary to promote integrated markets for environmentally sustainable finance at the global level. The UNFCCC and Paris Agreement's collective goal of mobilizing USD 100 billion per year by 2020, in the context of meaningful mitigation action and transparency on implementation, fell short in 2018. Meeting it therefore requires a high degree of coherence between the different capital-market frameworks and tools that are essential for investors to identify and seize green investment opportunities. This means working together to realize the potential of financial markets, and it is in this context that the International Platform on Sustainable Finance was created.

International Platform on Sustainable Finance (IPSF)

The International Platform on Sustainable Finance (IPSF) was launched on 18 October 2019 by the European Union.
The platform is a multi-stakeholder forum for dialogue between policymakers tasked with developing regulatory measures for sustainable finance, intended to help investors identify and seize sustainable investment opportunities that truly contribute to climate and environmental goals.

The founding members of the IPSF are the European Union and the competent authorities of Argentina, Canada, Chile, China, India, Kenya and Morocco. Since its foundation, the Hong Kong Special Administrative Region of the People's Republic of China (HKSAR), Indonesia, Japan, Malaysia, New Zealand, Norway, Senegal, Singapore, Switzerland and the United Kingdom have also joined. Together, the 18 IPSF members represent 50% of the world's greenhouse gas emissions, 50% of the world's population and 45% of the world's GDP. The platform also has seven observers: the European Central Bank, the European Investment Bank, the OECD, UNEP, the NGFS, OICV-IOSCO and the Coalition of Finance Ministers for Climate Action.

The ultimate objective of the IPSF is to scale up the mobilization of private capital towards environmentally sustainable investments at the global level. To that end, it promotes integrated markets for environmentally sustainable finance by enabling members to exchange and disseminate information, promote best practice, benchmark their different initiatives, and identify barriers to and opportunities for sustainable finance, while respecting national and regional contexts. Where appropriate, willing members can work to align their initiatives and approaches.

Sustainable Finance in China

Development of Sustainable Finance in China

China, one of the world's largest economies and a central actor in global environmental challenges, has taken significant strides in the development of sustainable finance. The country's journey toward integrating environmental, social, and governance (ESG) criteria into its financial system is characterized by a commitment to addressing climate change, promoting green investment, and adopting international best practices.

Catalyst of Sustainable Finance in China

Green Bond Market in China

A pivotal moment in China's sustainable finance journey was the emergence of green bonds. In 2015, the People's Bank of China and the National Development and Reform Commission issued guidelines for green bond issuance. These guidelines established the framework for certifying and regulating green bonds, ushering in a new era of green investment in the country, and helped classify projects and set eligibility criteria within six environmental sectors. By the end of 2022, China had a cumulative labelled green bond volume of USD 489bn (RMB 3.3tn). In June 2020, the People's Bank of China (PBoC), the China Securities Regulatory Commission (CSRC) and the National Development and Reform Commission released a draft Green Bond Endorsed Project Catalogue, intended to build an overarching guideline for green bonds in China. China has since become the world's largest issuer of green bonds, with both domestic and international issuers seeking to fund environmentally friendly projects. Notable issuers include the Industrial and Commercial Bank of China (ICBC), which ranked largest among the roughly 40 green Kungfu bond issuers, with about USD 6.75bn issued.
Sustainable Finance in Hong Kong

Hong Kong's Financial Secretary, Paul Chan, delivered the 2023-24 budget on 22 February 2023 with the promotion of a green economy, sustainable development and China's "3060 Dual Carbon Targets" at the forefront.

Sustainable Finance and the European Union

European Green Deal

The European Green Deal is a proposal by the European Commission, approved in 2020, to put in place a series of policies to make Europe climate neutral by 2050 and to cut its CO2 emissions by at least half by 2030. Within it, the Commission has promised to raise no less than €1 trillion of sustainable investment to achieve the objectives of the European Green Deal. Part of this money has been raised to finance Next Generation EU. Sustainable finance is therefore one of the pillars of the EU Green Deal, and in addition to its own investments, the Commission also wants to promote private investment by introducing taxonomy regulation.

Next Generation EU

More recently, the European Commission, on behalf of its 27 member states, has also made greater use of green finance, especially green bonds (see the green bonds section), to finance part of NextGenerationEU. This initiative aims to relaunch the economy following the COVID-19 pandemic and to improve the European Union on several levels, including making it greener, accelerating its digitalisation, improving the health system, preparing it for future challenges, supporting young people and making Europe more inclusive. The main project under this initiative is the Recovery and Resilience Facility (RRF), which provides grants and loans to EU member states to support reform and investment. To access these funds, each EU member state must propose a plan, which must be approved by the European Commission and then by the Council. One of the most important criteria is that at least 37% of the plan is dedicated to the green transition and 20% to digitalisation. Disbursement is gradual, with 13% received after the contract is signed and the remainder paid on the basis of a bi-annual evaluation of a submitted report and payment request.

Tools and Standards

Green bonds

To actually green finance, or make it more sustainable, specific tools may be required, and some have already been developed. The main one is the green bond. Green bonds are bonds issued in the market by a public or private organization to finance environmentally friendly activities. Their issuance is growing steadily, with average growth of over 50% per year over the last five years; they reached $170 billion in 2018 and $523 billion in 2021. The aim of this type of bond is to encourage the financing of green projects by attracting investors and thereby reducing the cost of borrowing. According to empirical studies, the high demand for this type of bond gives it a lower yield than its standard equivalent. Some scientific papers, such as Gabor et al. (2019), strongly recommend including this climate factor in the risk assessment of bonds. The aim is, on the one hand, to increase the borrowing cost of brown bonds, which can fund carbon-intensive projects, and to disincentivise investment in them by increasing the weight of climate risk; on the other hand, the goal is to reduce the risk weight of green bonds in order to stimulate investment and potentially encourage banks to reduce the interest rate on these bonds.
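To make the "lower yield" point concrete, the sketch below computes the so-called greenium, the yield a green bond gives up relative to a comparable conventional bond from the same issuer. It is a minimal illustration: the function name and the yield figures are hypothetical, not market data.

```python
# Illustrative only: the "greenium" is the yield discount on a green bond
# relative to a comparable conventional ("twin") bond from the same issuer.
# The yields used below are hypothetical, not observed market data.

def greenium_bps(conventional_yield: float, green_yield: float) -> float:
    """Return the greenium in basis points (positive = green bond yields less)."""
    return (conventional_yield - green_yield) * 10_000

# Hypothetical twin bonds: a 1.50% conventional bond vs. a 1.47% green bond.
spread = greenium_bps(0.0150, 0.0147)
print(f"Greenium: {spread:.1f} bps")  # Greenium: 3.0 bps
```

A positive spread, as in this toy case, is what the empirical studies cited above describe: investors accept slightly less yield in exchange for the green label.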
From a legal point of view, green bonds are not really different from traditional bonds: the promises made to investors are not always included in the contract, and rarely in a binding way. Issuers of green bonds usually follow standards and principles set by private-led organisations, such as the International Capital Market Association (ICMA)'s Green Bond Principles or the Climate Bonds Initiative's label. The Paris agreement on climate change highlighted a desire to standardize reporting practices related to green bonds in order to avoid greenwashing. To date, there are no regulations requiring the borrower to specify its "green" intentions in writing; however, the EU is currently developing a green bond standard which will require issuers to fund activities aligned with the EU taxonomy for sustainable activities. This standard is expected to be voluntary, operating alongside other voluntary standards, with academics and practitioners raising policymakers' awareness of the dangers of imposing it as a mandatory standard.

The European Union has already created its own "Next Generation EU Green Bonds framework" to use green bonds to raise part of the funds for the Next Generation EU project. This project promises an investment of 750 billion euros in grants and loans (at 2018 prices) by the European Commission, aiming to revive the post-COVID-19 economy in the 27 EU member states. Up to 30% of the budget will be raised by issuing green bonds, amounting to up to €250 billion, of which €14.5 billion had already been raised by January 2022. This will make the European Commission the largest issuer of green bonds.

Empirical studies, such as that conducted by Baldi and Pandimiglio (2022), show that the risk of greenwashing is present and may wrongly induce investors to accept lower rates of return than for brown investments. Standardization of the taxonomy would reduce the greenwashing criticism that can be directed at this type of bond and would enhance clarity and transparency in their use. Baldi and Pandimiglio (2022) further suggest that rating agencies focus more on this type of risk in order to identify and quantify it better.

Taxonomy of sustainable activities

Because the energy transition is a broad concept, and "sustainable" or "green" can apply to many projects (renewable energy, energy efficiency, waste management, water management, public transportation, reforestation...), several taxonomies are being established to evaluate and certify "green" investments (those having no or very little impact on the environment). In 2018, the European Commission created a working group of technical experts on sustainable finance (TEG: Technical Expert Group) to define a classification of economic activities (the "taxonomy"), in order to have a robust methodology for determining whether an activity or company is sustainable. The aim of the taxonomy is to prevent greenwashing and to help investors make greener choices. Investments are judged against six objectives: climate change mitigation, climate change adaptation, the circular economy, pollution, effect on water, and biodiversity. The taxonomy came into force in July 2020. It is seen as the most comprehensive and sophisticated initiative of its type; it may inspire other countries to develop their own taxonomies or may indeed become the world's "gold standard". However, when the disclosure regime comes into effect in January 2022, there will still be huge gaps in data, and it may be several years before the framework becomes fully effective.
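As a structural illustration of how such a classification can be applied, the sketch below screens an activity against the six objectives listed above. It is a simplification under stated assumptions: the real Taxonomy Regulation applies detailed technical screening criteria and a "do no significant harm" test per activity, and the helper function here is hypothetical.

```python
# Minimal sketch of a taxonomy-style screen over the six objectives named in
# the text. Real EU Taxonomy alignment uses detailed technical screening
# criteria per activity; this boolean check only illustrates the structure.

OBJECTIVES = (
    "climate change mitigation",
    "climate change adaptation",
    "circular economy",
    "pollution",
    "water",
    "biodiversity",
)

def is_taxonomy_aligned(contributions: dict[str, bool], harms: dict[str, bool]) -> bool:
    """Aligned if the activity substantially contributes to at least one
    objective and does no significant harm to any of the others."""
    contributes = any(contributions.get(o, False) for o in OBJECTIVES)
    harmless = not any(harms.get(o, False) for o in OBJECTIVES)
    return contributes and harmless

# Hypothetical wind-farm activity: contributes to mitigation, harms nothing.
print(is_taxonomy_aligned({"climate change mitigation": True}, {}))  # True
```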
The classifications of fossil gas and nuclear energy are controversial. The European Commission asked its Joint Research Centre to assess the environmental sustainability of nuclear power; the results will be examined for three months by two expert groups before the Commission makes a decision on the classification. Natural gas is seen by some countries as the bridge between coal and renewable energy, and those countries argue for natural gas to be considered sustainable under a set of conditions. In response, various members of the expert group that advises the European Commission threatened to step down, stating that they see the inclusion of gas as a contradiction of climate science, since methane emissions from natural gas are a significant source of greenhouse gases. The UK is working on its own separate taxonomy.

Green-supporting factor on capital requirements

To encourage banks to increase green lending, commercial banks have proposed introducing a "green-supporting factor" into banks' capital requirements. This proposal is currently being considered by the European Commission and the European Banking Authority. However, the approach is generally opposed by central bankers and nonprofit organisations, which propose instead the adoption of higher capital requirements for assets linked with fossil fuels (a "brown-penalizing factor").
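A worked example helps show what such factors would do in practice. The sketch below scales a standard capital charge, using the Basel 8% minimum capital ratio, by a supporting or penalizing factor; the factor values are illustrative assumptions, since no such factor is currently in force in the EU.

```python
# Sketch of how a green-supporting or brown-penalizing factor would scale a
# bank's capital charge. The 8% baseline is the Basel minimum capital ratio;
# the 0.75 and 1.25 factor values are hypothetical, chosen for illustration.

def capital_charge(exposure: float, risk_weight: float, factor: float = 1.0) -> float:
    """Capital charge = exposure x risk weight x 8% minimum ratio x factor."""
    return exposure * risk_weight * 0.08 * factor

loan = 1_000_000  # EUR, a generic corporate loan with a 100% risk weight
print(capital_charge(loan, 1.00))               # 80000.0, baseline charge
print(capital_charge(loan, 1.00, factor=0.75))  # 60000.0, green-supporting factor
print(capital_charge(loan, 1.00, factor=1.25))  # 100000.0, brown-penalizing factor
```

The design dispute described above is visible in the arithmetic: a supporting factor frees up capital for green lending, while a penalizing factor raises the cost of carbon-intensive exposures without weakening banks' overall buffers.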
Mandatory and voluntary disclosure

Another set of tools and standards lies in reporting and transparency. In 2015, the Financial Stability Board (FSB) launched the Task Force on Climate-related Financial Disclosures (TCFD), led by Michael Bloomberg. The TCFD's recommendations aim to encourage companies to better disclose the climate-related risks in their business, as well as the internal governance enabling the management of these risks. In the United Kingdom, the Governor of the Bank of England, Mark Carney, has actively supported the TCFD's recommendations and has called on several occasions for obligations on companies in the financial sector to be transparent and to take financial risks into account in their management, notably through climate stress tests. In France, the 2015 Energy Transition Law requires institutional investors to be transparent about their integration of environmental, social and governance criteria into their investment strategy. Nevertheless, empirical research has shown the limited effect of disclosure policies if they remain voluntary.

In addition, in October 2022, the Corporate Sustainability Reporting Directive (CSRD) was adopted. This new reporting rule applies to all large firms, whether listed on stock markets or not; around 50,000 companies will be covered by the new rules, compared with about 11,700 under the former set of rules. More precisely, the CSRD introduces reporting on an organization's impact on the environment, human rights and social standards, and it requires more detailed reporting against common criteria, in line with the EU's climate goals. The Commission will adopt the first set of standards by June 2023; after that, it aims to extend the standards to more and more companies. From 1 January 2026, the rules will apply to listed SMEs and other undertakings, with reports due in 2027, although SMEs can opt out until 2028. Thanks to this new set of rules, the EU has become a front-runner in global sustainability reporting standards.

Green Monetary Policy

Policymakers, through green monetary policies, help speed up the adoption of sustainable finance by supporting the development of investment instruments and fund structures tailored specifically to sustainable finance, creating incentives for investors, and establishing a regulatory agenda to standardize ESG measures of performance.

Green Central Banking

The term "green central banking" refers to the critical role that central banks can play in achieving net-zero emissions targets and mitigating climate change. By adjusting their monetary policies towards a "green monetary policy" and adapting capital requirements, central banks can redirect investment into green financing.

Network for Greening the Financial System (NGFS)

In 2018, under the leadership of Mark Carney, Frank Elderson, and Banque de France Governor Villeroy de Galhau, eight central banks created the Network for Greening the Financial System (NGFS), a network of central banks and financial supervisors wanting to explore the potential role of central banks in accompanying the energy transition. The network has grown to 116 central banks and supervisors and 19 observers, including the International Monetary Fund (IMF) and the European Central Bank (ECB). Priorities for the NGFS include sharing best practices, advancing climate and environmental risk management in the financial sector, and mobilizing mainstream finance.

Several policy options for greening monetary policy instruments have been explored by the NGFS (a stylized sketch of the first option follows the list):
- Green refinancing operations: central banks can adopt green conditions when banks refinance themselves at the central bank, for example by granting a lower interest rate if banks issue a certain volume of loans for green projects.
- Green collateral frameworks: central banks can restrict collateral eligibility rules by excluding polluting assets, or by requiring banks to mobilize a pool of assets that is aligned with net-zero trajectories.
- Green quantitative easing: central banks could restrict their asset purchase programmes to green bonds.
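As a stylized sketch of the first option, the toy function below grants a discounted refinancing rate only to banks whose green-lending share clears a threshold. The threshold, base rate and discount are assumptions for illustration, not any central bank's actual parameters.

```python
# Toy sketch of a green refinancing operation: a central bank offers a lower
# refinancing rate to banks whose share of green lending passes a threshold.
# All numeric parameters below are invented for illustration.

def refinancing_rate(green_loan_share: float,
                     base_rate: float = 0.04,
                     discount: float = 0.005,
                     threshold: float = 0.30) -> float:
    """Return the policy rate offered to a bank, conditional on green lending."""
    return base_rate - discount if green_loan_share >= threshold else base_rate

print(f"{refinancing_rate(0.35):.2%}")  # 3.50%, green discount applied
print(f"{refinancing_rate(0.10):.2%}")  # 4.00%, standard rate
```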
The NGFS, through its working group "Workstream 2", published new scenarios for central banks and supervisors in September 2022, in partnership with an academic consortium. The NGFS Scenarios were developed to assess the impact of climate change on the global economy and financial markets. While developed primarily for use by central banks and supervisors, they may be valuable to the broader business sector, government, and academics as well.

European Central Bank's Financial Commitment to Addressing Climate Change

In July 2021, ahead of the United Nations Climate Change Conference (COP 26), under the leadership of Christine Lagarde and after pressure from NGOs, the ECB committed to contributing to the implementation of the Paris Agreement's aim of "making finance flows consistent with a pathway towards low greenhouse gas emissions and climate-resilient development" (Article 2.1(c) of the Paris Agreement, 2015). The ECB also announced a detailed roadmap for incorporating climate change into its monetary policy framework. The action plan includes measures to integrate climate-risk metrics into the ECB's collateral framework and into the bonds covered by its corporate sector purchase programme (CSPP). Christine Lagarde said she was also in favour of developing "green lending facilities" like those of the Bank of Japan and the People's Bank of China.

Action Plan of the ECB on Climate Change

In accordance with its recent decisions, the ECB commits to contributing to the Paris Agreement goals and NGFS initiatives within its mandate by taking the following specific actions:
- Integrating climate-related risks into financial stability monitoring and prudential supervision of banks
- Integrating sustainability factors into its own portfolio management
- Exploring the effects of climate-related risks on the Eurosystem monetary policy framework within its mandate
- Bridging gaps in climate-related data
- Working towards greater awareness and intellectual capacity, including through technical assistance and knowledge sharing

Debate

There are a few concerns and limitations that can be attributed to sustainable finance.

The large number of standards

First, as already seen, the concept of sustainable finance is directly linked with ESG. However, there are still no universally adopted standards for how companies and organisations can measure and report on their sustainability performance. Instead, a large number of NGOs work independently to develop standards for sustainability reporting, which creates complexity and confusion for companies and investors. Indeed, the initiators of reforms in sustainable finance can be very different: there are initiatives from non-governmental organisations such as the Global Reporting Initiative (GRI), the IFRS Foundation, the International Integrated Reporting Council (IIRC) and the Carbon Disclosure Project. Recently, however, the IFRS Foundation seems to be taking the lead. This is possible because the organisation possesses deep expertise in the standard-setting process, has legitimacy in the corporate and investor community, and is supported by regulators internationally.

Then, since sustainable finance is rather new and, above all, a constantly evolving topic with a large number of actors, it is impossible to find a framework that stays constant over time. For example, a new framework for sustainable finance, ISO 32210, was published in October 2022. This tool provides guidance to all organisations active in the financial sector, including but not limited to direct lenders and investors, asset managers and service providers, on the implementation of sustainability principles, practices and terminology for financing activities.

Because of this pool of standards and their constant evolution, it is not unusual that some funds or companies are not as green as they claim to be. Indeed, some ESG funds still hold shares in oil and coal companies; since there are no universally adopted standards, this practice is still ongoing. Lastly, it is important to mention that the focus here has been almost exclusively on the European Union; at the international level, the lack of homogeneity in sustainable finance norms and standards is even larger. However, initiatives such as the International Platform on Sustainable Finance open the discussion and the exchange of best practices in order to develop more international norms and standards.

Lack of comparability

In addition, the same actors also face a lack of comparability: it is very difficult to compare companies and investments on the basis of their ESG performance. Taking again the example of the oil and gas industry, reporting on sustainability is carried out in varied ways. According to a study conducted by researchers at the University of Perugia's Economics Department, out of 51 relevant GRI indicators, only four appear in over 75% of the companies' GRI reports.
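The coverage statistic behind this finding is straightforward to compute. The sketch below calculates, for a set of hypothetical company reports, the share of reports in which each indicator appears, and applies the study's 75% cut-off. The company names and indicator sets are invented stand-ins, not the study's data.

```python
# Computes indicator coverage across sustainability reports: for each GRI
# indicator, the share of companies whose report discloses it. The data below
# are hypothetical stand-ins, not the Perugia study's dataset.

from collections import Counter

reports = {                      # company -> GRI indicators disclosed
    "CompanyA": {"GRI 302-1", "GRI 305-1", "GRI 306-2"},
    "CompanyB": {"GRI 302-1", "GRI 305-1"},
    "CompanyC": {"GRI 302-1", "GRI 303-3"},
    "CompanyD": {"GRI 302-1", "GRI 305-1", "GRI 303-3"},
}

counts = Counter(ind for inds in reports.values() for ind in inds)
coverage = {ind: n / len(reports) for ind, n in counts.items()}

# Indicators reported by more than 75% of companies, the study's cut-off.
widely_reported = [ind for ind, share in coverage.items() if share > 0.75]
print(sorted(widely_reported))  # ['GRI 302-1']
```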
It is sometimes difficult even to compare the performance of the same company or fund over time: from one year to the next, changes in methodology, in the companies consulted, or in the standards used to measure the same thing can make comparison nearly impossible.

Green Central Banking legitimacy

Another concern worth debating in sustainable finance is the legitimacy of green central banking. In response to the recent global financial crisis, which started with the outbreak of the pandemic, there has been strong reliance on central banks to intervene, not only for their traditional prudential motives of ensuring price and financial stability but also for more promotional purposes, as a means of supporting other policy objectives such as promoting a low-carbon economy (Baer et al. 2021). However, according to many researchers, the pursuit of such promotional goals in monetary policy decisions raises serious questions about the legitimacy of independent central banks (Fontan et al. 2016). By way of illustration, Greenpeace protestors claimed in March 2021 that the European Central Bank's (ECB) monetary policies subsidise fossil fuel companies (Treeck, 2021). Furthermore, the Central Bank Independence (CBI) framework holds that central banks should be permitted to operate independently within a limited mandate (Dietsch et al., 2018), although other writers feel that changing the central bank's mandate is insufficient (Fontan et al. 2022).

Central banks are rarely tasked with advancing environmental or climate change mitigation objectives. When it comes to these environmental policies, central banks must make contestable trade-offs, and there is no agreement on who should bear the burden, a dilemma that neither conservative nor progressive central bankers can easily defend (Fontan et al. 2022). As a result, according to the same authors, the pursuit of green monetary policies puts central banks in a tough spot, casting doubt on their legitimacy. In a nutshell, Baer and co-authors argue that central banks may address their legitimacy issues by working in tandem with elected officials. In other words, a thorough examination of the actions of central banks necessitates a close examination of the actions of the governments and parliaments that formulate the central bank's mandate (Elgie 2002), whether through working with a green investment bank to reduce their carbon footprint or through joint committees of central bankers and members of parliament to influence the types of assets they purchase (Fontan et al. 2022).
Hong Kong
Hong Kong (Chinese: 香港; Jyutping: hoeng1 gong2, Cantonese: [hœ́ːŋ.kɔ̌ːŋ]), officially the Hong Kong Special Administrative Region of the People's Republic of China (abbr. Hong Kong SAR or HKSAR), is a city and a special administrative region in China. With 7.4 million residents of various nationalities in a 1,104-square-kilometre (426 sq mi) territory, Hong Kong is one of the most densely populated territories in the world.

Hong Kong was established as a colony of the British Empire after the Qing Empire ceded Hong Kong Island in 1841–1842. The colony expanded to the Kowloon Peninsula in 1860 and was further extended when the United Kingdom obtained a 99-year lease of the New Territories in 1898. Hong Kong was occupied by Japan from 1941 to 1945 during World War II. The whole territory was transferred from the United Kingdom to China in 1997. Hong Kong maintains separate governing and economic systems from those of mainland China under the principle of "one country, two systems".

Originally a sparsely populated area of farming and fishing villages, the territory is now one of the world's most significant financial centres and commercial ports. Hong Kong is the world's fourth-ranked global financial centre, ninth-largest exporter, and eighth-largest importer. Its currency, the Hong Kong dollar, is the ninth most traded currency in the world. Home to the second-highest number of billionaires of any city in the world, Hong Kong has the largest number of ultra high-net-worth individuals. Although the city has one of the highest per capita incomes in the world, severe income inequality exists among the population. Despite having the largest number of skyscrapers of any city in the world, Hong Kong experiences a chronic, well-documented housing shortage.

Hong Kong is a highly developed territory and has a Human Development Index (HDI) of 0.952, ranking fourth in the world. The city has the highest life expectancy in the world, and a public transport usage rate exceeding 90%.

Etymology

The name of the territory, first romanised as "He-Ong-Kong" in 1780, originally referred to a small inlet located between Aberdeen Island and the southern coast of Hong Kong Island. Aberdeen was an initial point of contact between British sailors and local fishermen. Although the source of the romanised name is unknown, it is generally believed to be an early phonetic rendering of the Cantonese (or Tanka Cantonese) phrase hēung góng. The name translates as "fragrant harbour" or "incense harbour". "Fragrant" may refer to the sweet taste of the harbour's freshwater influx from the Pearl River or to the odour from incense factories lining the coast of northern Kowloon. The incense was stored near Aberdeen Harbour for export before Victoria Harbour was developed. Sir John Davis (the second colonial governor) offered an alternative origin; Davis said that the name derived from "Hoong-keang" ("red torrent"), reflecting the colour of soil over which a waterfall on the island flowed.

The simplified name Hong Kong was frequently used by 1810. The name was also commonly written as the single word Hongkong until 1926, when the government officially adopted the two-word name. Some corporations founded during the early colonial era still keep this name, including Hongkong Land, Hongkong Electric Company, Hongkong and Shanghai Hotels and the Hongkong and Shanghai Banking Corporation (HSBC).
History

Prehistory and Imperial China

The earliest known human traces in what is now Hong Kong are dated by some to between 35,000 and 39,000 years ago, during the Paleolithic period. The claim is based on an archaeological investigation in Wong Tei Tung, Sai Kung in 2003. The archaeological works revealed knapped stone tools from deposits that were dated using optical luminescence dating.

During the Middle Neolithic period, about 6,000 years ago, the region had been widely occupied by humans. Neolithic to Bronze Age Hong Kong settlers were semi-coastal people. Early inhabitants are believed to have been Austronesians in the Middle Neolithic period and later the Yueh people. As hinted by the archaeological works in Sha Ha, Sai Kung, rice cultivation had been introduced since the Late Neolithic period. Bronze Age Hong Kong featured coarse pottery, hard pottery, quartz and stone jewelry, as well as small bronze implements.

The Qin dynasty incorporated the Hong Kong area into China for the first time in 214 BCE, after conquering the indigenous Baiyue. The region was consolidated under the Nanyue kingdom (a predecessor state of Vietnam) after the Qin collapse and recaptured by China after the Han conquest. During the Mongol conquest of China in the 13th century, the Southern Song court was briefly located in modern-day Kowloon City (the Sung Wong Toi site) before its final defeat in the 1279 Battle of Yamen by the Yuan dynasty. By the end of the Yuan dynasty, seven large families had settled in the region and owned most of the land. Settlers from nearby provinces migrated to Kowloon throughout the Ming dynasty.

The earliest European visitor was Portuguese explorer Jorge Álvares, who arrived in 1513. Portuguese merchants established a trading post called Tamão in Hong Kong waters and began regular trade with southern China. Although the traders were expelled after military clashes in the 1520s, Portuguese-Chinese trade relations were re-established by 1549. Portugal acquired a permanent lease for Macau in 1557.

After the Qing conquest, maritime trade was banned under the Haijin policies. From 1661 to 1683, the population of most of the area forming present-day Hong Kong was cleared under the Great Clearance, turning the region into a wasteland. The Kangxi Emperor lifted the maritime trade prohibition, allowing foreigners to enter Chinese ports in 1684. Qing authorities established the Canton System in 1757 to regulate trade more strictly, restricting non-Russian ships to the port of Canton. Although European demand for Chinese commodities like tea, silk, and porcelain was high, Chinese interest in European manufactured goods was insignificant, so that Chinese goods could only be bought with precious metals. To reduce the trade imbalance, the British sold large amounts of Indian opium to China. Faced with a drug crisis, Qing officials pursued ever more aggressive actions to halt the opium trade.

British colony

In 1839, the Daoguang Emperor rejected proposals to legalise and tax opium and ordered imperial commissioner Lin Zexu to eradicate the opium trade. The commissioner destroyed opium stockpiles and halted all foreign trade, triggering a British military response and the First Opium War. The Qing surrendered early in the war and ceded Hong Kong Island in the Convention of Chuenpi. British forces began controlling Hong Kong shortly after the signing of the convention, from 26 January 1841. However, both countries were dissatisfied and did not ratify the agreement.
After more than a year of further hostilities, Hong Kong Island was formally ceded to the United Kingdom in the 1842 Treaty of Nanking.

Administrative infrastructure was quickly built by early 1842, but piracy, disease, and hostile Qing policies initially prevented the government from attracting commerce. Conditions on the island improved during the Taiping Rebellion in the 1850s, when many Chinese refugees, including wealthy merchants, fled mainland turbulence and settled in the colony. Further tensions between the British and the Qing over the opium trade escalated into the Second Opium War. The Qing were again defeated and forced to give up Kowloon Peninsula and Stonecutters Island in the Convention of Peking. By the end of this war, Hong Kong had evolved from a transient colonial outpost into a major entrepôt. Rapid economic improvement during the 1850s attracted foreign investment, as potential stakeholders became more confident in Hong Kong's future.

The colony was further expanded in 1898, when the United Kingdom obtained a 99-year lease of the New Territories. The University of Hong Kong was established in 1911 as the territory's first institution of higher education. Kai Tak Airport began operation in 1924, and the colony avoided a prolonged economic downturn after the 1925–26 Canton–Hong Kong strike. At the start of the Second Sino-Japanese War in 1937, Governor Geoffry Northcote declared Hong Kong a neutral zone to safeguard its status as a free port. The colonial government prepared for a possible attack, evacuating all British women and children in 1940. The Imperial Japanese Army attacked Hong Kong on 8 December 1941, the same morning as its attack on Pearl Harbor. Hong Kong was occupied by Japan for almost four years before the British resumed control on 30 August 1945. Its population rebounded quickly after the war, as skilled Chinese migrants fled from the Chinese Civil War and more refugees crossed the border when the Chinese Communist Party took control of mainland China in 1949.

Hong Kong became the first of the Four Asian Tiger economies to industrialise during the 1950s. With a rapidly increasing population, the colonial government attempted reforms to improve infrastructure and public services. The public-housing estate programme, the Independent Commission Against Corruption, and the Mass Transit Railway were all established during the post-war decades to provide safer housing, integrity in the civil service, and more reliable transportation.

Nevertheless, widespread public discontent resulted in multiple protests from the 1950s to the 1980s, including pro-Republic of China and pro-Chinese Communist Party protests. In the 1967 Hong Kong riots, pro-PRC protestors clashed with the British colonial government. As many as 51 people were killed and 802 injured in the violence, including dozens killed by the Royal Hong Kong Police via beatings and shootings.

Although the territory's competitiveness in manufacturing gradually declined because of rising labour and property costs, it transitioned to a service-based economy. By the early 1990s, Hong Kong had established itself as a global financial centre and shipping hub.

Chinese special administrative region

The colony faced an uncertain future as the end of the New Territories lease approached, and Governor Murray MacLehose raised the question of Hong Kong's status with Deng Xiaoping in 1979.
Diplomatic negotiations with China resulted in the 1984 Sino-British Joint Declaration, in which the United Kingdom agreed to transfer the colony in 1997 and China would guarantee Hong Kong's economic and political systems for 50 years after the transfer. The impending transfer triggered a wave of mass emigration, as residents feared an erosion of civil rights, the rule of law, and quality of life. Over half a million people left the territory during the peak migration period, from 1987 to 1996. The Legislative Council became a fully elected legislature for the first time in 1995 and extensively expanded its functions and organisations throughout the last years of colonial rule. Hong Kong was transferred to China on 1 July 1997, after 156 years of British rule.

Immediately after the transfer, Hong Kong was severely affected by several crises. The Hong Kong government was forced to use substantial foreign exchange reserves to maintain the Hong Kong dollar's currency peg during the 1997 Asian financial crisis, and the recovery from this was muted by an H5N1 avian-flu outbreak and a housing surplus. This was followed by the 2003 SARS epidemic, during which the territory experienced its most serious economic downturn.

Political debates after the transfer of sovereignty have centred around the region's democratic development and the Chinese central government's adherence to the "one country, two systems" principle. After the reversal of the last colonial-era Legislative Council democratic reforms following the handover, the regional government unsuccessfully attempted to enact national security legislation pursuant to Article 23 of the Basic Law. The central government's decision to implement nominee pre-screening before allowing chief executive elections triggered a series of protests in 2014 which became known as the Umbrella Revolution. Discrepancies in the electoral registry and the disqualification of elected legislators after the 2016 Legislative Council elections, along with the enforcement of national law in the West Kowloon high-speed railway station, raised further concerns about the region's autonomy. In June 2019, mass protests erupted in response to a proposed extradition amendment bill permitting the extradition of fugitives to mainland China. The protests are the largest in Hong Kong's history, with organisers claiming to have attracted more than three million Hong Kong residents.

The Hong Kong regional government and the Chinese central government responded to the protests with a number of administrative measures to quell dissent. In June 2020, the Legislative Council passed the National Anthem Ordinance, which criminalised "insults to the national anthem of China". The Chinese central government meanwhile enacted the Hong Kong national security law to help quell protests in the region. Nine months later, in March 2021, the Chinese central government introduced amendments to Hong Kong's electoral system, which included the reduction of directly elected seats in the Legislative Council and the requirement that all candidates be vetted and approved by a Beijing-appointed Candidate Eligibility Review Committee. In May 2023, the Legislative Council introduced legislation to reduce the number of directly elected seats in the district councils as well, and a District Council Eligibility Review Committee was similarly established to vet candidates.
Government and politics

Hong Kong is a special administrative region of China, with executive, legislative, and judicial powers devolved from the national government. The Sino-British Joint Declaration provided for economic and administrative continuity through the transfer of sovereignty, resulting in an executive-led governing system largely inherited from the territory's history as a British colony. Under these terms and the "one country, two systems" principle, the Basic Law of Hong Kong is the regional constitution. The regional government is composed of three branches:
- Executive: The Chief Executive is responsible for enforcing regional law, can force reconsideration of legislation, and appoints Executive Council members and principal officials. Acting with the Executive Council, the Chief Executive-in-Council can propose new bills, issue subordinate legislation, and has authority to dissolve the legislature. In states of emergency or public danger, the Chief Executive-in-Council is further empowered to enact any regulation necessary to restore public order.
- Legislature: The unicameral Legislative Council enacts regional law, approves budgets, and has the power to impeach a sitting chief executive.
- Judiciary: The Court of Final Appeal and lower courts interpret laws and overturn those inconsistent with the Basic Law. Judges are appointed by the chief executive on the advice of a recommendation commission.

The chief executive is the head of government and serves for a maximum of two five-year terms. The State Council (led by the Premier of China) appoints the chief executive after nomination by the Election Committee, which is composed of 1,500 business, community, and government leaders.

The Legislative Council has 90 members, each serving a four-year term: twenty are directly elected from geographical constituencies, thirty represent functional constituencies (FC), and forty are chosen by an election committee consisting of representatives appointed by the Chinese central government. The 30 limited-electorate functional constituencies, representing sectors of the economy or special interest groups, fill their seats using first-past-the-post or instant-runoff voting. (Before the 2021 electoral overhaul there were thirty-five FC seats, five of which were nominated from sitting district council members and selected in region-wide double direct elections, with all popularly elected members chosen by proportional representation.)

Twenty-two political parties had representatives elected to the Legislative Council in the 2016 election. These parties have aligned themselves into three ideological groups: the pro-Beijing camp (the current government), the pro-democracy camp, and localist groups. The Chinese Communist Party does not have an official political presence in Hong Kong, and its members do not run in local elections. Hong Kong is represented in the National People's Congress by 36 deputies chosen through an electoral college, and by 203 delegates in the Chinese People's Political Consultative Conference appointed by the central government.

Chinese national law does not generally apply in the region, and Hong Kong is treated as a separate jurisdiction. Its judicial system is based on common law, continuing the legal tradition established during British rule. Local courts may refer to precedents set in English law and overseas jurisprudence.
However, mainland criminal procedure law applies to cases investigated by the Office for Safeguarding National Security of the CPG in the HKSAR. Interpretative and amending power over the Basic Law and jurisdiction over acts of state lie with the central authority, making regional courts ultimately subordinate to the mainland's socialist civil law system. Decisions made by the Standing Committee of the National People's Congress override any territorial judicial process. Furthermore, in circumstances where the Standing Committee declares a state of emergency in Hong Kong, the State Council may enforce national law in the region.

The territory's jurisdictional independence is most apparent in its immigration and taxation policies. The Immigration Department issues passports for permanent residents which differ from those of the mainland or Macau, and the region maintains a regulated border with the rest of the country. All travellers between Hong Kong and either the mainland or Macau must pass through border controls, regardless of nationality. Mainland Chinese citizens do not have right of abode in Hong Kong and are subject to immigration controls. Public finances are handled separately from the national government; taxes levied in Hong Kong do not fund the central authority.

The Hong Kong Garrison of the People's Liberation Army is responsible for the region's defence. Although the Chairman of the Central Military Commission is supreme commander of the armed forces, the regional government may request assistance from the garrison. Hong Kong residents are not required to perform military service, and current law has no provision for local enlistment, so the territory's defence is composed entirely of non-Hongkongers.

The central government and the Ministry of Foreign Affairs handle diplomatic matters, but Hong Kong retains the ability to maintain separate economic and cultural relations with foreign nations. The territory actively participates in the World Trade Organization, the Asia-Pacific Economic Cooperation forum, the International Olympic Committee, and many United Nations agencies. The regional government maintains trade offices in Greater China and other nations.

The imposition of the Hong Kong national security law by the central government in Beijing in June 2020 resulted in the suspension of bilateral extradition treaties by the United Kingdom, Canada, Australia, New Zealand, Finland, and Ireland. The United States ended its preferential economic and trade treatment of Hong Kong in July 2020 because it was no longer able to distinguish Hong Kong as a separate entity from the People's Republic of China.

Administrative divisions

The territory is divided into 18 districts, each represented by a district council. These advise the government on local issues such as public facility provisioning, community programme maintenance, cultural promotion, and environmental policy. As of 2019, there are a total of 479 district council seats, 452 of which are directly elected. Rural committee chairmen, representing outlying villages and towns, fill the 27 non-elected seats. In May 2023, the government proposed reforms to the district council electoral system which further cut the number of directly elected seats from 452 to 88, and total seats from 479 to 470. A requirement that district council candidates be vetted and approved by the District Council Eligibility Review Committee was also proposed. The Legislative Council approved the reforms in July 2023.
Political reforms and sociopolitical issues

Hong Kong is governed by a hybrid regime that is not fully representative of the population. Legislative Council members elected by functional constituencies composed of professional and special interest groups are accountable to these narrow corporate electorates and not the general public. This electoral arrangement has guaranteed a pro-establishment majority in the legislature since the transfer of sovereignty. Similarly, the chief executive is selected by establishment politicians and corporate members of the Election Committee rather than directly elected. Although universal suffrage for the chief executive and all Legislative Council elections are defined goals of Basic Law Articles 45 and 68, the legislature is only partially directly elected, and the executive continues to be nominated by an unrepresentative body. The government has been repeatedly petitioned to introduce direct elections for these positions.

Ethnic minorities (except those of European ancestry) have marginal representation in government and often experience discrimination in housing, education, and employment. Employment vacancies and public service appointments frequently have language requirements which minority job seekers do not meet, and language education resources remain inadequate for Chinese learners. Foreign domestic helpers, predominantly women from the Philippines and Indonesia, have little protection under regional law. Although they live and work in Hong Kong, these workers are not treated as ordinary residents and do not have the right of abode in the territory. Sex trafficking in Hong Kong is an issue: local and foreign women and girls are often forced into prostitution in brothels, homes, and businesses in the city.

The Joint Declaration guarantees the Basic Law of Hong Kong for 50 years after the transfer of sovereignty. It does not specify how Hong Kong will be governed after 2047, and the central government's role in determining the territory's future system of government is the subject of political debate and speculation. Hong Kong's political and judicial systems may be integrated with China's at that time, or the territory may continue to be administered separately.

However, in response to large-scale protests in 2019 and 2020, the Standing Committee of the National People's Congress passed the controversial Hong Kong national security law. The law criminalises secession, subversion, terrorism and collusion with foreign elements and establishes the Office for Safeguarding National Security of the CPG in the HKSAR, an investigative office under Central People's Government authority immune from HKSAR jurisdiction. Some of the aforementioned acts were previously considered protected speech under Hong Kong law. The United Kingdom considers the law to be a serious violation of the Joint Declaration. In October 2020, Hong Kong police arrested seven pro-democracy politicians over tussles with pro-Beijing politicians in the Legislative Council in May; they were charged with contempt and with interfering with members of the council, while none of the pro-Beijing lawmakers were detained. Annual commemorations of the 1989 Tiananmen Square protests and massacre were also cancelled amidst fears of violating the national security law.
In March 2021, the Chinese central government unilaterally changed Hong Kong's electoral system and established the Candidate Eligibility Review Committee, tasked with screening and evaluating political candidates for their "patriotism".

Geography

Hong Kong is on China's southern coast, 60 km (37 mi) east of Macau, on the east side of the mouth of the Pearl River estuary. It is surrounded by the South China Sea on all sides except the north, which neighbours the Guangdong city of Shenzhen along the Sham Chun River. The territory's 1,110.18 km2 (428.64 sq mi) area (2,754.97 km2 if the maritime area is included) consists of Hong Kong Island, the Kowloon Peninsula, the New Territories, Lantau Island, and over 200 other islands. Of the total area, 1,073 km2 (414 sq mi) is land and 35 km2 (14 sq mi) is water. The territory's highest point is Tai Mo Shan, 957 metres (3,140 ft) above sea level. Urban development is concentrated on the Kowloon Peninsula, on Hong Kong Island, and in new towns throughout the New Territories. Much of this is built on reclaimed land: 70 km2 (27 sq mi), or 6% of the total land (about 25% of developed space in the territory), is reclaimed from the sea.

Undeveloped terrain is hilly to mountainous, with very little flat land, and consists mostly of grassland, woodland, shrubland, or farmland. About 40% of the remaining land area is country parks and nature reserves. The territory has a diverse ecosystem: over 3,000 species of vascular plants occur in the region (300 of which are native to Hong Kong), along with thousands of insect, avian, and marine species.

Climate

Hong Kong has a humid subtropical climate (Köppen Cwa), characteristic of southern China, despite being located south of the Tropic of Cancer. Summers are long, hot and humid, with occasional showers and thunderstorms and warm air from the southwest; the humid nature of Hong Kong exacerbates the warmth of summer. Typhoons occur most often then, sometimes resulting in floods or landslides. Winters are short, mild and usually sunny at the beginning, becoming cloudy towards February; frequent cold fronts bring strong, cooling winds from the north and occasionally result in chilly weather. Autumn is the sunniest season, whilst spring is generally cloudy. When there is snowfall, which is extremely rare, it is usually at high elevations. Hong Kong averages 1,709 hours of sunshine per year. Historic temperature extremes at the Hong Kong Observatory are 36.6 °C (97.9 °F) on 22 August 2017 and 0.0 °C (32.0 °F) on 18 January 1893. The highest and lowest recorded temperatures in all of Hong Kong are 39.0 °C (102 °F) at Wetland Park on 22 August 2017, and −6.0 °C (21.2 °F) at Tai Mo Shan on 24 January 2016.

Architecture

Hong Kong has the world's largest number of skyscrapers, with 482 towers taller than 150 metres (490 ft), and the third-largest number of high-rise buildings in the world. The lack of available space restricted development to high-density residential tenements and commercial complexes packed closely together on buildable land. Single-family detached homes are uncommon and generally only found in outlying areas. The International Commerce Centre and Two International Finance Centre are the tallest buildings in Hong Kong and are among the tallest in the Asia-Pacific region.
Other distinctive buildings lining the Hong Kong Island skyline include the HSBC Main Building, the anemometer-topped triangular Central Plaza, the circular Hopewell Centre, and the sharp-edged Bank of China Tower.

Demand for new construction has contributed to frequent demolition of older buildings, freeing space for modern high-rises. However, many examples of European and Lingnan architecture are still found throughout the territory. Older government buildings are examples of colonial architecture. The 1846 Flagstaff House, the former residence of the commanding British military officer, is the oldest Western-style building in Hong Kong. Some (including the Court of Final Appeal Building and the Hong Kong Observatory) retain their original function, and others have been adapted and reused; the Former Marine Police Headquarters was redeveloped into a commercial and retail complex, and Béthanie (built in 1875 as a sanatorium) houses the Hong Kong Academy for Performing Arts. The Tin Hau Temple, dedicated to the sea goddess Mazu (originally built in 1012 and rebuilt in 1266), is the territory's oldest existing structure. The Ping Shan Heritage Trail has architectural examples of several imperial Chinese dynasties, including the Tsui Sing Lau Pagoda (Hong Kong's only remaining pagoda).

Tong lau, mixed-use tenement buildings constructed during the colonial era, blended southern Chinese architectural styles with European influences. These were especially prolific during the immediate post-war period, when many were rapidly built to house large numbers of Chinese migrants. Examples include Lui Seng Chun, the Blue House in Wan Chai, and the Shanghai Street shophouses in Mong Kok. Mass-produced public-housing estates, built since the 1960s, are mainly constructed in modernist style.

Demographics

The Census and Statistics Department estimated Hong Kong's population at 7,413,070 in 2021. The overwhelming majority (91.6%) is Han Chinese, most of whom are Taishanese, Teochew, Hakka, and other Cantonese peoples. The remaining 8.4% are non-ethnic-Chinese minorities, primarily Filipinos, Indonesians, and South Asians. However, most Filipinos and Indonesians in Hong Kong are short-term workers; according to a 2021 thematic report by the Hong Kong government, after excluding foreign domestic helpers, the number of non-Chinese ethnic minorities in the city was 301,344, or 4% of Hong Kong's population. About half the population have some form of British nationality, a legacy of colonial rule: 3.4 million residents have British National (Overseas) status, and 260,000 British citizens live in the territory. The vast majority also hold Chinese nationality, automatically granted to all ethnic Chinese residents at the transfer of sovereignty. Headline population density exceeds 7,060 people/km2 and is the fourth-highest in the world.

The predominant language is Cantonese, a variety of Chinese originating in Guangdong. It is spoken by 93.7% of the population: 88.2% as a first language and 5.5% as a second language. Slightly over half the population (58.7%) speaks English, the other official language: 4.6% are native speakers, and 54.1% speak English as a second language. Code-switching, mixing English and Cantonese in informal conversation, is common among the bilingual population. Post-handover governments have promoted Mandarin, which is currently about as prevalent as English: 54.2% of the population speak Mandarin, with 2.3% as native speakers and 51.9% as a second language.
Traditional Chinese characters are used in writing, rather than the simplified characters used in the mainland. Among the religious population, the traditional "three teachings" of China, Buddhism, Confucianism, and Taoism, have the most adherents (20%), followed by Christianity (12%) and Islam (4%). Followers of other religions, including Sikhism, Hinduism, and Judaism, generally originate from regions where their religion predominates. Life expectancy in Hong Kong was 81.3 years for males and 87.2 years for females in 2022, one of the highest in the world. Cancer, pneumonia, heart disease, cerebrovascular disease, and accidents are the territory's five leading causes of death. The universal public healthcare system is funded by general-tax revenue, and treatment is highly subsidised; on average, 95% of healthcare costs are covered by the government. The city has severe income inequality, which has risen since the transfer of sovereignty, as the region's ageing population has gradually added to the number of nonworking people. Although median household income steadily increased during the decade to 2016, the wage gap remained high; earners above the 90th percentile receive 41% of all income. The city has the most billionaires per capita, with one billionaire per 109,657 people, as well as the second-highest number of billionaires of any city in the world, the highest number of billionaires of any city in Asia, and the largest concentration of ultra high-net-worth individuals of any city in the world. Despite government efforts to reduce the growing disparity, median income for the top 10% of earners is 44 times that of the bottom 10%. Economy One of the world's most significant financial centres and commercial ports, Hong Kong has a market economy focused on services, characterised by low taxation, minimal government market intervention, and an established international financial market. It is the world's 35th-largest economy, with a nominal GDP of approximately US$373 billion. Hong Kong's economy ranked at the top of the Heritage Foundation's economic freedom index between 1995 and 2021. However, Hong Kong was removed from the index by the Heritage Foundation in 2021, with the Foundation citing a "loss of political freedom and autonomy ... [making Hong Kong] almost indistinguishable in many respects from other major Chinese commercial centers like Shanghai and Beijing". Hong Kong is highly developed, and ranks fourth on the UN Human Development Index. The Hong Kong Stock Exchange is the seventh-largest in the world, with a market capitalisation of HK$30.4 trillion (US$3.87 trillion) as of December 2018. Hong Kong is ranked as the 17th most innovative territory in the Global Innovation Index in 2023, and 3rd in the Global Financial Centres Index. The city is sometimes referred to as "Silicon Harbor", a nickname derived from Silicon Valley in California. Hong Kong hosts many high-tech and innovation companies, including several multinationals. Hong Kong is the ninth largest trading entity in exports and eighth largest in imports (2021), trading more goods in value than its gross domestic product. Over half of its cargo throughput consists of transshipments (goods travelling through Hong Kong). Products from mainland China account for about 40% of that traffic. The city's location allowed it to establish a transportation and logistics infrastructure which includes the world's seventh-busiest container port and the busiest airport for international cargo.
The territory's largest export markets are mainland China and the United States. Hong Kong is a key part of the 21st Century Maritime Silk Road. It has little arable land and few natural resources, importing most of its food and raw materials. More than 90% of Hong Kong's food is imported, including nearly all of its meat and rice. Agricultural activity is 0.1% of GDP and consists of growing premium food and flower varieties. Although the territory had one of Asia's largest manufacturing economies during the latter half of the colonial era, Hong Kong's economy is now dominated by the service sector. The sector generates 92.7% of economic output, with the public sector accounting for about 10%. Between 1961 and 1997 Hong Kong's gross domestic product increased by a factor of 180, and per capita GDP increased by a factor of 87. The territory's GDP relative to mainland China's peaked at 27% in 1993; it fell to less than 3% in 2017, as the mainland developed and liberalised its economy. Economic and infrastructure integration with China has increased significantly since the 1978 start of market liberalisation on the mainland. Since resumption of cross-boundary train service in 1979, many rail and road links have been improved and constructed, facilitating trade between regions. The Closer Economic Partnership Arrangement formalised a policy of free trade between the two areas, with each jurisdiction pledging to remove remaining obstacles to trade and cross-boundary investment. A similar economic partnership with Macau details the liberalisation of trade between the special administrative regions. Chinese companies have expanded their economic presence in the territory since the transfer of sovereignty. Mainland firms represent over half of the Hang Seng Index value, up from 5% in 1997. As the mainland liberalised its economy, Hong Kong's shipping industry faced intense competition from other Chinese ports. Half of China's trade goods were routed through Hong Kong in 1997, dropping to about 13% by 2015. The territory's minimal taxation, common law system, and civil service attract overseas corporations wishing to establish a presence in Asia. The city has the second-highest number of corporate headquarters in the Asia-Pacific region. Hong Kong is a gateway for foreign direct investment in China, giving investors open access to mainland Chinese markets through direct links with the Shanghai and Shenzhen stock exchanges. The territory was the first market outside mainland China for renminbi-denominated bonds, and is one of the largest hubs for offshore renminbi trading. In November 2020, Hong Kong's Financial Services and the Treasury Bureau proposed a new law that would restrict cryptocurrency trading to professional investors only, leaving amateur traders (93% of Hong Kong's trading population) out of the market. The Hong Kong dollar, the local currency, is the eighth most traded currency in the world. Due to extremely compact house sizes and high housing density, the city has the most expensive housing market in the world. The government has had a passive role in the economy. Colonial governments had little industrial policy and implemented almost no trade controls. Under the doctrine of "positive non-interventionism", post-war administrations deliberately avoided the direct allocation of resources; active intervention was considered detrimental to economic growth.
While the economy transitioned to a service basis during the 1980s, late colonial governments introduced interventionist policies. Post-handover administrations continued and expanded these programmes, including export-credit guarantees, a compulsory pension scheme, a minimum wage, anti-discrimination laws, and a state mortgage backer. Tourism is a major part of the economy, accounting for 5% of GDP. In 2016, 26.6 million visitors contributed HK$258 billion (US$32.9 billion) to the territory, making Hong Kong the 14th most popular destination for international tourists. It is the most popular Chinese city for tourists, receiving over 70% more visitors than its closest competitor (Macau). The city is ranked as one of the most expensive cities for expatriates. However, since 2020, there has been a sharp decline in incoming visitors due to tight COVID-19 travel restrictions. Additionally, due to the closure of Russian airspace in 2022, multiple airlines decided to cease their operations in Hong Kong. In an attempt to attract tourists back to Hong Kong, the Hong Kong government announced plans to give away 500,000 free airline tickets in 2023. Infrastructure Transport Hong Kong has a highly developed, sophisticated transport network. Over 90% of daily trips are made on public transport, the highest percentage in the world. The Octopus card, a contactless smart payment card, is widely accepted on railways, trams, buses and ferries, and can be used for payment in most retail stores. The Peak Tram, Hong Kong's first public transport system, has provided funicular rail transport between Central and Victoria Peak since 1888. The Central and Western District has an extensive system of escalators and moving pavements, including the Mid-Levels escalator (the world's longest outdoor covered escalator system). Hong Kong Tramways covers a portion of Hong Kong Island. The Mass Transit Railway (MTR) is an extensive passenger rail network, connecting 93 metro stations throughout the territory. With a daily ridership of almost five million, the system serves 41% of all public transit passengers in the city and has an on-time rate of 99.9%. Cross-boundary train service to Shenzhen is offered by the East Rail line, and longer-distance inter-city trains to Guangzhou, Shanghai, and Beijing are operated from Hung Hom station. Connecting service to the national high-speed rail system is provided at West Kowloon railway station. Although public transport systems handle most passenger traffic, there are over 500,000 private vehicles registered in Hong Kong. Automobiles drive on the left (unlike in mainland China), because of historical influence of the British Empire. Vehicle traffic is extremely congested in urban areas, exacerbated by limited space to expand roads and an increasing number of vehicles. More than 18,000 taxicabs, easily identifiable by their bright colour, are licensed to carry riders in the territory. Bus services operate more than 700 routes across the territory, with smaller public light buses (also known as minibuses) serving areas standard buses do not reach as frequently or directly. Highways, organised with the Hong Kong Strategic Route and Exit Number System, connect all major areas of the territory. The Hong Kong–Zhuhai–Macau Bridge provides a direct route to the western side of the Pearl River estuary. Hong Kong International Airport is the territory's primary airport.
Over 100 airlines operate flights from the airport, including locally based Cathay Pacific (flag carrier), Hong Kong Airlines, low-cost airline HK Express and cargo airline Air Hong Kong. It was the world's eighth-busiest airport by passenger traffic before the COVID-19 pandemic, and it handles the most air-cargo traffic in the world. Most private recreational aviation traffic flies through Shek Kong Airfield, under the supervision of the Hong Kong Aviation Club. The Star Ferry operates two lines across Victoria Harbour for its 53,000 daily passengers. Ferries also serve outlying islands inaccessible by other means. Smaller kai-to boats serve the most remote coastal settlements. Ferry travel to Macau and mainland China is also available. Junks, once common in Hong Kong waters, are no longer widely available and are used privately and for tourism. The large size of the port gives Hong Kong the classification of Large-Port Metropolis. Utilities Hong Kong generates most of its electricity locally. The vast majority of this energy comes from fossil fuels, with 46% from coal and 47% from petroleum. The rest comes from imports, including nuclear energy generated in mainland China. Renewable sources account for a negligible amount of energy generated for the territory. Small-scale wind-power sources have been developed, and a small number of private homes and public buildings have installed solar panels. With few natural lakes and rivers, high population density, inaccessible groundwater sources, and extremely seasonal rainfall, the territory does not have a reliable source of freshwater. The Dongjiang River in Guangdong supplies 70% of the city's water, and the remaining demand is filled by harvesting rainwater. Toilets in most built-up areas of the territory flush with seawater, greatly reducing freshwater use. Broadband Internet access is widely available, with 92.6% of households connected. Connections over fibre-optic infrastructure are increasingly prevalent, contributing to the high regional average connection speed of 21.9 Mbit/s (the world's fourth-fastest). Mobile-phone use is ubiquitous; there are more than 18 million mobile-phone accounts, more than double the territory's population. Culture Hong Kong is characterised as a hybrid of East and West. Traditional Chinese values emphasising family and education blend with Western ideals, including economic liberty and the rule of law. Although the vast majority of the population is ethnically Chinese, Hong Kong has developed a distinct identity. The territory diverged from the mainland through its long period of colonial administration and a different pace of economic, social, and cultural development. Mainstream culture is derived from immigrants originating from various parts of China. This was influenced by British-style education, a separate political system, and the territory's rapid development during the late 20th century. Most migrants of that era fled poverty and war, reflected in the prevailing attitude toward wealth; Hongkongers tend to link self-image and decision-making to material benefits. Residents' sense of local identity has markedly increased post-handover: The majority of the population (52%) identifies as "Hongkongers", while 11% describe themselves as "Chinese". The remaining population reports mixed identities, 23% as "Hongkonger in China" and 12% as "Chinese in Hong Kong". Traditional Chinese family values, including family honour, filial piety, and a preference for sons, are prevalent.
Nuclear families are the most common households, although multi-generational and extended families are not unusual. Spiritual concepts such as feng shui are observed; large-scale construction projects often hire consultants to ensure proper building positioning and layout. The degree of its adherence to feng shui is believed to determine the success of a business. Bagua mirrors are regularly used to deflect evil spirits, and buildings often lack floor numbers with a 4; the number has a similar sound to the word for "die" in Cantonese. Cuisine Food in Hong Kong is primarily based on Cantonese cuisine, despite the territory's exposure to foreign influences and its residents' varied origins. Rice is the staple food, and is usually served plain with other dishes. Freshness of ingredients is emphasised. Poultry and seafood are commonly sold live at wet markets, and ingredients are used as quickly as possible. There are five daily meals: breakfast, lunch, afternoon tea, dinner, and siu yeh. Dim sum, as part of yum cha (brunch), is a dining-out tradition with family and friends. Dishes include congee, cha siu bao, siu yuk, egg tarts, and mango pudding. Local versions of Western food are served at cha chaan teng (Hong Kong-style cafes). Common cha chaan teng menu items include macaroni in soup, deep-fried French toast, and Hong Kong-style milk tea. Cinema Hong Kong developed into a filmmaking hub during the late 1940s as a wave of Shanghai filmmakers migrated to the territory, and these movie veterans helped build the colony's entertainment industry over the next decade. By the 1960s, the city was well known to overseas audiences through films such as The World of Suzie Wong. When Bruce Lee's The Way of the Dragon was released in 1972, local productions became popular outside Hong Kong. During the 1980s, films such as A Better Tomorrow, As Tears Go By, and Zu Warriors from the Magic Mountain expanded global interest beyond martial arts films; locally made gangster films, romantic dramas, and supernatural fantasies became popular. Hong Kong cinema continued to be internationally successful over the following decade with critically acclaimed dramas such as Farewell My Concubine, To Live, and Chungking Express. The city's martial arts film roots are evident in the roles of the most prolific Hong Kong actors. Jackie Chan, Donnie Yen, Jet Li, Chow Yun-fat, and Michelle Yeoh frequently play action-oriented roles in foreign films. Hong Kong films have also grown popular in overseas markets such as Japan, South Korea, and Southeast Asia, earning the city the moniker "Hollywood of the East". At the height of the local movie industry in the early 1990s, over 400 films were produced each year; since then, industry momentum shifted to mainland China. The number of films produced annually has declined to about 60 in 2017. Music Cantopop is a genre of Cantonese popular music which emerged in Hong Kong during the 1970s. Evolving from Shanghai-style shidaiqu, it is also influenced by Cantonese opera and Western pop. Local media featured songs by artists such as Sam Hui, Anita Mui, Leslie Cheung, and Alan Tam; during the 1980s, exported films and shows exposed Cantopop to a global audience. The genre's popularity peaked in the 1990s, when the Four Heavenly Kings dominated Asian record charts.
Despite a general decline since late in the decade, Cantopop remains dominant in Hong Kong; contemporary artists such as Eason Chan, Joey Yung, and Twins are popular in and beyond the territory. Western classical music has historically had a strong presence in Hong Kong and remains a large part of local musical education. The publicly funded Hong Kong Philharmonic Orchestra, the territory's oldest professional symphony orchestra, frequently hosts musicians and conductors from overseas. The Hong Kong Chinese Orchestra, composed of classical Chinese instruments, is the leading Chinese ensemble and plays a significant role in promoting traditional music in the community. Hong Kong has never had a national anthem separate from that of the country controlling it; its current official national anthem is therefore that of China, March of the Volunteers. The song Glory to Hong Kong has been used by protestors as an unofficial anthem of the city. Sport and recreation Despite its small area, the territory is home to a variety of sports and recreational facilities. The city has hosted numerous major sporting events, including the 2009 East Asian Games, the 2008 Summer Olympics equestrian events, and the 2007 Premier League Asia Trophy. The territory regularly hosts the Hong Kong Sevens, Hong Kong Marathon, Hong Kong Tennis Classic and Lunar New Year Cup, and hosted the inaugural AFC Asian Cup and the 1995 Dynasty Cup. Hong Kong represents itself separately from mainland China, with its own sports teams in international competitions. The territory has participated in almost every Summer Olympics since 1952 and has earned nine medals. Lee Lai-shan won the territory's first Olympic gold medal at the 1996 Atlanta Olympics, and Cheung Ka Long won the second one in Tokyo 2020. Hong Kong athletes have won 126 medals at the Paralympic Games and 17 at the Commonwealth Games. Hong Kong is no longer part of the Commonwealth of Nations, and its last appearance at the Commonwealth Games was in 1994. Dragon boat races originated as a religious ceremony conducted during the annual Tuen Ng Festival. The race was revived as a modern sport as part of the Tourism Board's efforts to promote Hong Kong's image abroad. The first modern competition was organised in 1976, and overseas teams began competing in the first international race in 1993. The Hong Kong Jockey Club, the territory's largest taxpayer, has a monopoly on gambling and provides over 7% of government revenue. Three forms of gambling are legal in Hong Kong: lotteries, horse racing, and football. Education Education in Hong Kong is largely modelled on that of the United Kingdom, particularly the English system. Children are required to attend school from age 6 until completion of secondary education, generally at age 18. At the end of secondary schooling, all students take a public examination and are awarded the Hong Kong Diploma of Secondary Education upon successful completion. Of residents aged 15 and older, 81% completed lower-secondary education, 66% graduated from an upper secondary school, 32% attended a non-degree tertiary program, and 24% earned a bachelor's degree or higher. Mandatory education has contributed to an adult literacy rate of 95.7%.
The literacy rate is lower than that of other developed economies because of the influx of refugees from mainland China during the post-war colonial era; much of the elderly population were not formally educated because of war and poverty. Comprehensive schools fall under three categories: public schools, which are government-run; subsidised schools, including government aid-and-grant schools; and private schools, often those run by religious organisations and that base admissions on academic merit. These schools are subject to the curriculum guidelines as provided by the Education Bureau. Private schools subsidised under the Direct Subsidy Scheme and international schools fall outside of this system and may elect to use differing curricula and teach using other languages. Medium of instruction At primary and secondary school levels, the government maintains a policy of "mother tongue instruction"; most schools use Cantonese as the medium of instruction, with written education in both Chinese and English. Other languages used as the medium of instruction in non-international schools include English and Putonghua (Standard Mandarin Chinese). Secondary schools emphasise "bi-literacy and tri-lingualism", which has encouraged the proliferation of spoken Mandarin language education. English is the official medium of instruction and assessments for most university programmes in Hong Kong, although use of Cantonese is predominant in informal discussions among local students and professors. Tertiary education Hong Kong has eleven universities. The University of Hong Kong (HKU) was founded as the city's first institute of higher education during the early colonial period in 1911. The Chinese University of Hong Kong (CUHK) was established in 1963 to fill the need for a university that taught using Chinese as its primary language of instruction. Along with the Hong Kong University of Science and Technology (HKUST) established in 1991, these universities are consistently ranked among the top 50 or top 100 universities worldwide. The Hong Kong Polytechnic University (PolyU) and City University of Hong Kong (CityU), both granted university status in 1994, are consistently ranked among the top 100 or top 200 universities worldwide. The Hong Kong Baptist University (HKBU) was granted university status in 1994 and is a liberal arts institution. Lingnan University, Education University of Hong Kong, Hong Kong Metropolitan University (formerly Open University of Hong Kong), Hong Kong Shue Yan University and Hang Seng University of Hong Kong all attained full university status in subsequent years. Media Most of the newspapers in Hong Kong are written in Chinese, but there are also a few English-language newspapers. The major one is the South China Morning Post, with The Standard serving as a business-oriented alternative. A variety of Chinese-language newspapers are published daily; the most prominent are Ming Pao and Oriental Daily News. Local publications are often politically affiliated, with pro-Beijing or pro-democracy sympathies. The central government has a print-media presence in the territory through the state-owned Ta Kung Pao and Wen Wei Po. Several international publications have regional operations in Hong Kong, including The Wall Street Journal, Financial Times, The New York Times International Edition, USA Today, Yomiuri Shimbun, and The Nikkei. Three free-to-air television broadcasters operate in the territory: TVB, HKTVE, and Hong Kong Open TV air eight digital channels.
TVB, Hong Kong's dominant television network, has an 80% viewer share. Pay TV services operated by Cable TV Hong Kong and PCCW offer hundreds of additional channels and cater to a variety of audiences. RTHK is the public broadcaster, providing seven radio channels and three television channels. Ten non-domestic broadcasters air programming for the territory's foreign population. Access to media and information over the Internet is not subject to mainland Chinese regulations, including the Great Firewall, yet local control applies.
East Point Light
The East Point Light, known as the Maurice River Light before 1913, is a lighthouse in Heislerville, on the Delaware Bay at the mouth of the Maurice River, in Maurice River Township, Cumberland County, New Jersey, United States. The lighthouse was built in 1849 and is the second oldest in New Jersey, with only the Sandy Hook Light, which was built in 1764, being older. The light was inactive from 1941 and was nearly destroyed by fire in 1971. The light was reinstated by the United States Coast Guard in 1980. Exterior restoration was completed in 1999. It was added to the National Register of Historic Places on August 25, 1995 for its significance in engineering, maritime history, and transportation. It became part of the Maurice River Lighthouse and East Point Archeological District on October 30, 2015. The lighthouse has since been fully restored, with both the exterior and interior work completed in 2017. The lighthouse is now both an active navigational aid and a year-round museum open to the public for tours and special events throughout the year. Status The light is said to be critically endangered due to erosion. Although local governments routinely shore up the property's perimeter, using 3,000-pound (1,400 kg) sandbags and bulldozers, the lighthouse is a mere 40 yards (37 m) from the shore. Aerial photos from 1940 show that the beach was once four times as wide. During storms the surf is 10 yards (9.1 m) from its front steps. A rally to save the lighthouse was held in the fall of 2018. Since then, more sandbags paid for by the State of New Jersey have been added, and the sandbag seawall was rebuilt through the coordinated efforts of the Maurice River Township and Cumberland County road departments. A geotube system is planned to be installed in the summer of 2019 by the State of New Jersey to help hold the point and protect the lighthouse until more lasting measures can be taken.
Science communication
Science communication encompasses a wide range of activities that connect science and society. Common goals of science communication include informing non-experts about scientific findings, raising the public awareness of and interest in science, influencing people's attitudes and behaviors, informing public policy, and engaging with diverse communities to address societal problems. The term "science communication" generally refers to settings in which audiences are not experts on the scientific topic being discussed (outreach), though some authors categorize expert-to-expert communication ("inreach" such as publication in scientific journals) as a type of science communication. Examples of outreach include science journalism and health communication. Since science has political, moral, and legal implications, science communication can help bridge gaps between different stakeholders in public policy, industry, and civil society. Science communicators are a broad group of people: scientific experts, science journalists, science artists, medical professionals, nature center educators, science advisors for policymakers, and everyone else who communicates with the public about science. They often use entertainment and persuasion techniques including humour, storytelling, and metaphors to connect with their audience's values and interests. Science communication also exists as an interdisciplinary field of social science research on topics such as misinformation, public opinion of emerging technologies, and the politicization and polarization of science. For decades, science communication research has had only limited influence on science communication practice, and vice-versa, but both communities are increasingly attempting to bridge research and practice. Historically, academic scientists were discouraged from spending time on public outreach, but that has begun to change. Research funders have raised their expectations for researchers to have broader impacts beyond publication in academic journals. An increasing number of scientists, especially younger scholars, are expressing interest in engaging the public through social media and in-person events, though they still perceive significant institutional barriers to doing so. Science communication is closely related to the fields of informal science education, citizen science, and public engagement with science, and there is no general agreement on whether or how to distinguish them. Like other aspects of society, science communication is influenced by systemic inequalities that impact both inreach and outreach. Motivations Writing in 1987, Geoffrey Thomas and John Durant advocated various reasons to increase public understanding of science, or scientific literacy. More trained engineers and scientists could allow a nation to be more competitive economically. Science can also benefit individuals. Science can simply have aesthetic appeal (e.g., popular science or science fiction). Background scientific knowledge can help people negotiate life in an increasingly technological society. The science of happiness is an example of a field whose research can have direct and obvious implications for individuals. Governments and societies might also benefit from more scientific literacy, since an informed electorate promotes a more democratic society.
Moreover, science can inform moral decision making (e.g., answering questions about whether animals can feel pain, how human activity influences climate, or even a science of morality). In 1990, Steven Hilgartner, a scholar in science and technology studies, criticized some academic research in public understanding of science. Hilgartner argued that what he called "the dominant view" of science popularization tends to imply a tight boundary around those who can articulate true, reliable knowledge. By defining a "deficient public" as recipients of knowledge, the scientists get to emphasize their own identity as experts, according to Hilgartner. Understood in this way, science communication may explicitly exist to connect scientists with the rest of society, but science communication may reinforce the boundary between the public and the experts (according to work by Brian Wynne in 1992 and Massimiano Bucchi in 1998). In 2016, the scholarly journal Public Understanding of Science ran an essay competition on the "deficit model" or "deficit concept" of science communication and published a series of articles answering the question "In science communication, why does the idea of a public deficit always return?" in different ways; for example, Carina Cortassa's essay argued that the deficit model of science communication is just a special case of an omnipresent problem studied in social epistemology of testimony, the problem of "epistemic asymmetry", which arises whenever some people know more about some things than other people. Science communication is just one kind of attempt to reduce epistemic asymmetry between people who may know more and people who may know less about a certain subject. Biologist Randy Olson said in 2009 that anti-science groups can often be so motivated, and so well funded, that the impartiality of science organizations in politics can lead to crises of public understanding of science. He cited examples of denialism (for instance, climate change denial) to support this worry. Journalist Robert Krulwich likewise argued in 2008 that the stories scientists tell compete with the efforts of people such as Turkish creationist Adnan Oktar. Krulwich explained that attractive, easy to read, and cheap creationist textbooks were sold by the thousands to schools in Turkey (despite their strong secular tradition) due to the efforts of Oktar. Astrobiologist David Morrison has spoken of repeated disruption of his work by popular anti-scientific phenomena, having been called upon to assuage public fears of an impending cataclysm involving an unseen planetary object—first in 2008, and again in 2012 and 2017. Methods Science popularization figures such as Carl Sagan and Neil deGrasse Tyson are partly responsible for shaping how the general public views science and specific scientific disciplines. However, the degree of knowledge and experience a science popularizer has can vary greatly. Because of this, some science communication can depend on sensationalism. As a Forbes contributor put it, "The main job of physics popularizers is the same as it is for any celebrity: get more famous." Another point in the controversy of popular science is the idea of how public debate can affect public opinion. A relevant and highly public example of this is climate change.
A science communication study reported in The New York Times found that "even a fractious minority wields enough power to skew a reader's perception of a [science news] story" and that even "firmly worded (but not uncivil) disagreements between commenters affected readers' perception of science." This causes some to worry that the further popularization of science will create pressure towards generalization or sensationalism. Marine biologist and film-maker Randy Olson published Don't Be Such a Scientist: Talking Substance in an Age of Style. In the book he describes how there has been an unproductive negligence when it comes to teaching scientists to communicate. Don't Be Such a Scientist is addressed to his fellow scientists, and he says they need to "lighten up". He adds that scientists are ultimately the most responsible for promoting and explaining science to the public and media. This, Olson says, should be done according to a good grasp of social science; scientists must use persuasive and effective means like story telling. Olson acknowledges that the stories told by scientists need not only be compelling but also accurate to modern science—and says this added challenge must simply be confronted. He points to figures like Carl Sagan as effective popularizers, partly because such figures actively cultivate a likeable image. In his commencement address to Caltech students, journalist Robert Krulwich delivered a speech entitled "Tell me a story". Krulwich says that scientists are actually given many opportunities to explain something interesting about science or their work, and that they must seize such opportunities. He says scientists must resist shunning the public, as Sir Isaac Newton did in his writing, and instead embrace metaphors the way Galileo did; Krulwich suggests that metaphors only become more important as the science gets more difficult to understand. He adds that telling stories of science in practice, of scientists' success stories and struggles, helps convey that scientists are real people. Finally, Krulwich advocates for the importance of scientific values in general, and helping the public to understand that scientific views are not mere opinions, but hard-won knowledge. Actor Alan Alda has helped scientists and PhD students become more comfortable with communication with the help of drama coaches, who use the acting techniques of Viola Spolin. Matthew Nisbet described the use of opinion leaders as intermediaries between scientists and the public as a way to reach the public via trained individuals who are more closely engaged with their communities, such as "teachers, business leaders, attorneys, policymakers, neighborhood leaders, students, and media professionals". Examples of initiatives that have taken this approach include Science & Engineering Ambassadors, sponsored by the National Academy of Sciences, and Science Booster Clubs, coordinated by the National Center for Science Education. Evidence based practices Similar to how evidence-based medicine gained a foothold in medical communication decades ago, researchers Eric Jensen and Alexander Gerber have argued that science communication would benefit from evidence-based prescriptions since the field faces related challenges.
In particular, they argued that the lack of collaboration between researchers and practitioners is a problem: "Ironically, the challenges begin with communication about science communication evidence." The overall effectiveness of the science communication field is limited by the lack of effective transfer mechanisms for practitioners to apply research in their work and perhaps even investigate, together with researchers, communication strategies, Jensen and Gerber said. Closer collaboration could enrich the spectrum of science communication research and increase the existing methodological toolbox, including more longitudinal and experimental studies. Evidence-based science communication would combine the best available evidence from systematic research, underpinned by established theory, as well as practitioners' acquired skills and expertise, reducing the double-disconnect between scholarship and practice. Neither side adequately takes into account the other's priorities, needs and possible solutions, Jensen and Gerber argued; bridging the gap and fostering closer collaboration could allow for mutual learning, enhancing the overall advancements of science communication as a young field. Imagining science's publics In the preface of The Selfish Gene, Richard Dawkins wrote: "Three imaginary readers looked over my shoulder while I was writing, and I now dedicate the book to them. [...] First the general reader, the layman [...] second the expert [and] third the student". Many criticisms of the public understanding of science movement have emphasized that this thing they were calling the public was somewhat of an (unhelpful) black box. Approaches to the public changed with the move away from the public understanding of science. Science communication researchers and practitioners now often showcase their desire to listen to non-scientists as well as acknowledging an awareness of the fluid and complex nature of (post/late) modern social identities. At the very least, people will use plurals: publics or audiences. As the editor of the scholarly journal Public Understanding of Science put it in a special issue on publics: "We have clearly moved from the old days of the deficit frame and thinking of publics as monolithic to viewing publics as active, knowledgeable, playing multiple roles, receiving as well as shaping science" (Einsiedel, 2007: 5). However, Einsiedel goes on to suggest both views of the public are "monolithic" in their own way; they both choose to declare what something called the public is. Some promoters of public understanding of science might have ridiculed publics for their ignorance, but an alternative "public engagement with science and technology" romanticizes its publics for their participatory instincts, intrinsic morality or simple collective wisdom. As Susanna Hornig Priest concluded in her 2009 introduction essay on science's contemporary audiences, the job of science communication might be to help non-scientists feel they are not excluded as opposed to always included; that they can join in if they want, rather than that there is a necessity to spend their lives engaging. The process of quantifiably surveying public opinion of science is now largely associated with the public understanding of science movement (some would say unfairly).
In the US, Jon Miller is the name most associated with such work and well known for differentiating between identifiable "attentive" or "interested" publics (that is to say science fans) and those who do not care much about science and technology. Miller's work questioned whether the American public had the following four attributes of scientific literacy:
knowledge of basic textbook scientific facts
an understanding of scientific method
appreciation of the positive outcomes of science and technology
rejection of superstitious beliefs, such as astrology or numerology
In some respects, John Durant's work surveying the British public applied ideas similar to Miller's. However, they were slightly more concerned with attitudes to science and technology, rather than just how much knowledge people had. They also looked at public confidence in their knowledge, considering issues such as the gender of those ticking "don't know" boxes. We can see aspects of this approach, as well as a more "public engagement with science and technology" influenced one, reflected within the Eurobarometer studies of public opinion. These have been running since 1973 to monitor public opinion in the member states, with the aim of helping the preparation of policy (and evaluation of policy). They look at a host of topics, not just science and technology but also defense, the euro, enlargement of the European Union, and culture. Eurobarometer's 2008 study of Europeans' Attitudes to Climate Change is a good example. It focuses on respondents' "subjective level of information", asking "personally, do you think that you are well informed or not about...?" rather than checking what people knew. Frame analysis Science communication can be analyzed through frame analysis, a research method used to analyze how people understand situations and activities. Some features of this analysis are listed below.
Public accountability: placing blame on public actions for value, e.g. political gain in the climate change debate
Runaway technology: creating a certain view of technological advancements, e.g. photos of an exploded nuclear power plant
Scientific uncertainty: questioning the reliability of a scientific theory, e.g. arguing how bad global climate change can be if humans are still alive
Heuristics People make an enormous number of decisions every day, and to approach all of them in a careful, methodical manner is impractical. They therefore often use mental shortcuts known as "heuristics" to quickly arrive at acceptable inferences. Tversky and Kahneman originally proposed three heuristics, listed below, although there are many others that have been discussed in later research.
Representativeness: used to make assumptions about probability based on relevancy, e.g. how likely item A is to be a member of category B (is Kim a chef?), or that event C resulted from process D (could the sequence of coin tosses H-H-T-T have occurred randomly?).
Availability: used to estimate how frequent or likely an event is based on how quickly one can conjure examples of the event. For example, if you were asked to approximate the number of people in your age group who are currently in college, your judgment would be affected by how many of your own acquaintances are in college.
Anchoring and adjustment: used when making judgments with uncertainties. One will start with an anchoring point, then adjust it to reach an assumption. For example, if you are asked to estimate how many people will take Dr. Smith's biology class this spring, you may recall that 38 students took the class in the fall, and adjust your estimation based on whether the class is more popular in the spring or in the fall.
The most effective science communication efforts take into account the role that heuristics play in everyday decision-making. Many outreach initiatives focus solely on increasing the public's knowledge, but studies have found little, if any, correlation between knowledge levels and attitudes towards scientific issues. Inclusive communication and cultural differences Inclusive science communication seeks to build equity by prioritizing communication that is built with and for marginalized groups that are not reached through typical top-down science communication. Science communication is affected by the same implicit inequities embedded in the production of science research. It has traditionally centered Western science and communicated in Western languages. Māori researcher Linda Tuhiwai Smith details how scientific research is "inextricably linked to European imperialism and colonialism". The field's focus on Western science results in publicizing "discoveries" by Western scientists that have been known to Indigenous scientists and communities for generations, continuing the cycle of colonial exploitation of physical and intellectual resources. Collin Bjork notes that science communication is linked to oppression because European colonizers "employed both the English language and western science as tools for subjugating others". Today, English is still considered the international language of science and 80% of science journals in Scopus are published in English. As a result, most science journalism also communicates in English or must use English sources, limiting the audience that science communication can reach. Just as science has historically excluded communities of Black, Indigenous and people of color, LGBTQ+ communities and communities of lower socioeconomic status or education, science communication has also failed to center these audiences. Science communication cannot be inclusive or effective if these communities are not involved in both the creation and dissemination of science information. One strategy to improve inclusivity in science communication is by building philanthropic coalitions with marginalized communities. The 2018 article titled "The Civic Science Imperative" in the Stanford Social Innovation Review (SSIR) outlined how civic science could expand inclusion in science and science communication. Civic science fosters public engagement with science issues so citizens can spur meaningful policy, societal or democratic change. This article outlined the strategies of supporting effective science communication and engagement, building diverse coalitions, building flexibility to meet changing goals, centering shared values, and using research and feedback loops to increase trust.
However, the authors of the 2020 SSIR article "How Science Philanthropy Can Build Equity" warned that these approaches will not combat systemic barriers of racism, sexism, ableism, xenophobia or classism without the principles of diversity, equity and inclusion (DEI). DEI in science communication can take many forms, but will always: include marginalized groups in the goal setting, design and implementation of the science communication; use experts to determine the unique values, needs and communication style of the community being reached; test to determine the best way to reach each segment of a community; and include ways to mitigate harm or stress for community members who engage with this work. Efforts to make science communication more inclusive can focus on a global, national or local community. The Metcalf Institute for Marine & Environmental Reporting at the University of Rhode Island produced a survey of these practices in 2020. "How Science Philanthropy Can Build Equity" also lists several successful civic science projects and approaches. Complementary methods for including diverse voices include the use of poetry, participatory arts, film, and games, all of which have been used to engage various publics by monitoring, deliberating, and responding to their attitudes toward science and scientific discourse. Science in popular culture and the media Birth of public science While scientific study began to emerge as a popular discourse following the Renaissance and the Enlightenment, science was not widely funded or exposed to the public until the nineteenth century. Most science prior to this was funded by individuals under private patronage and was studied in exclusive groups, like the Royal Society. Public science emerged due to a gradual social change, resulting from the rise of the middle class in the nineteenth century. As scientific inventions like the conveyor belt and the steam locomotive entered and enhanced people's lifestyles in the nineteenth century, scientific research began to be widely funded by universities and other public institutions. Since scientific achievements were beneficial to society, the pursuit of scientific knowledge resulted in science as a profession. Scientific institutions, like the National Academy of Sciences or the British Association for the Advancement of Science, are examples of leading platforms for the public discussion of science. David Brewster, founder of the British Association for the Advancement of Science, believed in regulated publications in order to effectively communicate discoveries, "so that scientific students may know where to begin their labours." As the communication of science reached a wider audience, due to the professionalization of science and its introduction to the public sphere, the interest in the subject increased. Scientific media in the 19th century There was a change in media production in the nineteenth century. The invention of the steam-powered printing press enabled more pages to be printed per hour, which resulted in cheaper texts. Book prices gradually dropped, which gave the working classes the ability to purchase them. No longer reserved for the elite, affordable and informative texts were made available to a mass audience.
Historian Aileen Fyfe noted that, as the nineteenth century experienced a set of social reforms that sought to improve the lives of those in the working classes, the availability of public knowledge was valuable for intellectual growth. As a result, there were reform efforts to further the knowledge of the less educated. The Society for the Diffusion of Useful Knowledge, led by Henry Brougham, attempted to organize a system for widespread literacy for all classes. Additionally, weekly periodicals, like the Penny Magazine, aimed to educate the general public on scientific achievements in a comprehensive manner. As the audience for scientific texts expanded, the interest in public science did as well. "Extension lectures" were introduced at some universities, like Oxford and Cambridge, which encouraged members of the public to attend lectures. In America, traveling lectures were a common occurrence in the nineteenth century and attracted hundreds of viewers. These public lectures were a part of the lyceum movement and demonstrated basic scientific experiments, which advanced scientific knowledge for both the educated and uneducated viewers. Not only did the popularization of public science enlighten the general public through mass media, but it also enhanced communication within the scientific community. Although scientists had been communicating their discoveries and achievements through print for centuries, publications with a variety of subjects decreased in popularity. Alternatively, publications in discipline-specific journals were crucial for a successful career in the sciences in the nineteenth century. As a result, scientific journals such as Nature or National Geographic possessed a large readership and received substantial funding by the end of the nineteenth century as the popularization of science continued. Science communication in contemporary media Science can be communicated to the public in many different ways. According to Karen Bultitude, a science communication lecturer at University College London, these can be broadly categorized into three groups: traditional journalism, live or face-to-face events, and online interaction. Traditional journalism Traditional journalism (for example, newspapers, magazines, television and radio) has the advantage of reaching large audiences; in the past, this was the way most people regularly accessed information about science. Traditional media is also more likely to produce information that is high quality (well written or presented), as it will have been produced by professional journalists. Traditional journalism is often also responsible for setting agendas and having an impact on government policy. The traditional journalistic method of communication is one-way, so there can be no dialogue with the public, and science stories can often be reduced in scope so that there is a limited focus for a mainstream audience, who may not be able to comprehend the bigger picture from a scientific perspective. However, there is new research now available on the role of newspapers and television channels in constituting "scientific public spheres" which enable participation of a wide range of actors in public deliberations. Another disadvantage of traditional journalism is that, once a science story is taken up by mainstream media, the scientists involved no longer have any direct control over how their work is communicated, which may lead to misunderstanding or misinformation.
Research in this area demonstrates how the relationship between journalists and scientists has been strained in some instances. On one hand, scientists have reported being frustrated with things like journalists oversimplifying or dramatizing their work, while on the other hand journalists find scientists difficult to work with and ill-equipped to communicate their work to a general audience. Despite this potential tension, a comparison of scientists from several countries has shown that many scientists are pleased with their media interactions and engage often. However, traditional media sources, like newspapers and television, have steadily declined as primary sources for science information, while the internet has rapidly increased in prominence. In 2016, 55% of Americans reported using the internet as their primary source to learn about science and technology, compared to 24% reporting TV and 4% reporting newspapers were their primary sources. Additionally, traditional media outlets have dramatically decreased the number of, or in some cases eliminated, science journalists and the amount of science-related content they publish. Live or face-to-face events The second category is live or face-to-face events, such as public lectures in museums or universities, debates, science busking, "sci-art" exhibits, Science Cafés and science festivals. Citizen science or crowd-sourced science (scientific research conducted, in whole or in part, by amateur or nonprofessional scientists) can be done with a face-to-face approach, online, or as a combination of the two to engage in science communication. Research has shown that members of the public seek out science information that is entertaining but that also helps them participate critically in risk regulation and S&T governance. Therefore, it is important to bear this aspect in mind when communicating scientific information to the public (for example, through events combining science communication and comedy, such as Festival of the Spoken Nerd, or during scientific controversies). The advantages of this approach are that it is more personal and allows scientists to interact with the public, allowing for two-way dialogue. Scientists are also better able to control content using this method. Disadvantages of this method include its limited reach; it can be resource-intensive and costly, and it may attract only audiences with an existing interest in science. Online interaction The third category is online interaction; for example, websites, blogs, wikis and podcasts can be used for science communication, as can other social media or forms of artificial intelligence like AI-Chatbots. Online methods of communicating science have the potential to reach huge audiences, can allow direct interaction between scientists and the public, and the content is always accessible and can be somewhat controlled by the scientist. Additionally, online communication of science can help boost scientists' reputation through increased citations, better circulation of articles, and establishing new collaborations. Online communication also allows for both one-way and two-way communication, depending on the audience's and the author's preferences.
However, there are disadvantages in that it is difficult to control how content is picked up by others, and regular attention and updating is needed. When considering whether or not to engage in science communication online, scientists should review what science communication research has shown to be the potential positive and negative outcomes. Online communication has given rise to movements like open science, which advocates for making science more accessible. However, when engaging in communication about science online, scientists should consider not publicizing or reporting findings from their research until the work has been peer-reviewed and published, as, under the "Ingelfinger rule", journals may not accept work that has already been circulated. Other considerations revolve around how scientists will be perceived by other scientists for engaging in communication. For example, some scholars have criticized publicly engaged, popular scholars, invoking concepts like the Sagan effect or the Kardashian Index. Despite these criticisms, many scientists are taking to communicating their work on online platforms, a sign of potentially changing norms in the field. Art According to Lesen et al. (2016), art has been a tool increasingly used to attract the public to science. Whether in a formal or an informal context, collaboration between artists and scientists can raise the general public's awareness of current topics in science, technology, engineering and mathematics (STEM). The arts have the power to create emotional links between the public and a research topic and to create a collaborative atmosphere that can "activate science" in a different way. Learning through the affective domain, in contrast to the cognitive domain, increases motivation, and using the arts to communicate scientific knowledge in this way could dramatically increase engagement. Social media science communication By using Twitter, scientists and science communicators can discuss scientific topics with many types of audiences with various points of view. Studies published in 2012 by Gunther Eysenbach shed light on how Twitter not only communicates science to the public but also affects advances in the science community. Alison Bert, editor in chief of Elsevier Connect, wrote a 2014 news article titled "How to use social media for science" that reported on a panel about social media at that year's AAAS meeting, in which panelists Maggie Koerth-Baker, Kim Cobb, and Danielle N. Lee noted some potential benefits and drawbacks to scientists of sharing their research on Twitter. Koerth-Baker, for example, commented on the importance of keeping public and private personas on social media separate in order to maintain professionalism online. Interviewed in 2014, Karen Peterson, director of Scientific Career Development at Fred Hutchinson Cancer Research Center, stressed the importance for scientists of using social networks such as Facebook and Twitter to establish an online presence. Kimberly Collins et al., writing in PLOS One in 2016, explained reasons why some scientists were hesitant to join Twitter, including lack of knowledge of the platform and inexperience with how to make meaningful posts.
Some scientists also did not see the value in using Twitter as a platform to share their research, or did not have the time to add the information to the accounts themselves. In 2016, Elena Milani created the SciHashtag Project, a condensed collection of Twitter hashtags about science communication. In 2017, a study by the Pew Research Center found that about "a quarter of social media users (26%) follow science accounts" on social media. This group of users "places both more importance and comparatively more trust on science news that comes to them through social media". Scientists have also used other social media platforms, including Instagram and Reddit, to establish a connection with the public and discuss science. The public understanding of science movement "Public understanding of science", "public awareness of science" and "public engagement with science and technology" are all terms associated with a movement involving governments and societies in the late 20th century. During the late 19th century, science became a professional subject and increasingly subject to government influence. Prior to this, public understanding of science was very low on the agenda. However, some well-known figures such as Michael Faraday ran lectures aimed at the non-expert public, his most famous being the Christmas Lectures, which began in 1825. The 20th century saw groups founded on the basis that they could position science in a broader cultural context and allow scientists to communicate their knowledge in a way that could reach and be understood by the general public. In the UK, The Bodmer Report (or The Public Understanding of Science as it is more formally known) published in 1985 by The Royal Society changed the way scientists communicated their work to the public. The report was designed to "review the nature and extent of the public understanding of science in the United Kingdom and its adequacy for an advanced democracy".: 5–7  Chaired by the geneticist Sir Walter Bodmer alongside famous scientists as well as broadcaster Sir David Attenborough, the report took evidence from all of the major sectors concerned: scientists, politicians, journalists and industrialists, but not the general public.: 5–7  One of the main assumptions drawn from the report was that everybody should have some grasp of science and that this should be introduced from a young age by teachers who are suitably qualified in the subject area. The report also asked for further media coverage of science, including via newspapers and television, which ultimately led to the establishment of platforms such as the Vega Science Trust. In both the UK and the United States following the Second World War, public views of scientists swung from great praise to resentment. The Bodmer Report therefore highlighted concerns from the scientific community that its withdrawal from society was weakening funding for scientific research. Bodmer promoted the communication of science to a wider general public by expressing to British scientists that it was their responsibility to publicize their research. An upshot of the publication of the report was the creation of the Committee on the Public Understanding of Science (COPUS), a collaboration between the British Association for the Advancement of Science, the Royal Society and the Royal Institution. The engagement between these individual societies ensured that the public understanding of science movement was taken seriously.
COPUS also awarded grants for specific outreach activities, allowing public understanding to come to the fore and ultimately leading to a cultural shift in the way scientists publicized their work to the wider non-expert community. Although COPUS no longer exists within the UK, the name has been adopted in the US by the Coalition on the Public Understanding of Science, an organization funded by the US National Academy of Sciences and the National Science Foundation that focuses on popular science projects such as science cafés, festivals, magazines and citizen science schemes. In the European Union, public views on publicly funded research and on the role of governmental institutions in funding scientific activities were being questioned as the allocated budget increased. The European Commission therefore strongly encouraged, and later obliged, research organizations to communicate about their research activities and results widely and to the general public. This is done by integrating into each research project a communication plan that increases the public visibility of the project using accessible language and adapted channels and materials. See also Conversazione Hype in science List of science communicators Public awareness of science Science-to-business marketing Notes and references Further reading Bauer, M & Bucchi, M (eds) (2007). Journalism, Science and Society (London & New York: Routledge). Bucchi, M & Trench, B (eds) (2014). Handbook of Public Communication of Science and Technology (2nd ed.) (London & New York: Routledge). Cartwright, JH & Baker, B (2005). Literature and Science: Social Impact and Interaction (Santa Barbara: ABC-CLIO). Drake, JL et al. (eds) (2013). New Trends in Earth-Science Outreach and Engagement: The Nature of Communication (Cham, Switzerland: Springer). Fortenberry, RC (2018). Complete Science Communication: A Guide to Connecting with Scientists, Journalists and the Public (London: Royal Society of Chemistry). Gregory, J & Miller, S (1998). Science in Public: Communication, Culture and Credibility (New York: Plenum). Holliman, R et al. (eds) (2009). Investigating Science Communication in the Information Age: Implications for Public Engagement and Popular Media (Oxford: Oxford University Press). National Academies of Sciences, Engineering, and Medicine (2016). Communicating Science Effectively: A Research Agenda (Washington, DC: The National Academies Press). doi:10.17226/23674 Nelkin, D (1995). Selling Science: How the Press Covers Science & Technology, 2nd edition (New York: WH Freeman). Wilson, A et al. (eds.) (1998). Handbook of Science Communication (Bristol; Philadelphia: Institute of Physics).
duarte costa
Duarte Costa (born 22 June 1988) is a Portuguese politician, climate expert and co-chair of the party Volt Portugal. Costa is his party's top candidate for the 2024 European elections. Career Duarte Costa was born in Lisbon in 1988 and studied geography at the University of Lisbon from 2006 to 2009. Costa graduated from the University of Sussex with a Master's degree in Climate Change and Policy. Convinced that science should have a stronger influence on political decisions, he joined the party Volt. Costa was the top candidate for the European constituency in the 2022 Portuguese legislative elections. In June 2022, Costa and Ana Carvalho were elected Co-Chairs of Volt Portugal. In the party's list election for the 2024 European Parliament election, he was nominated as joint top candidate together with Rhia Lopes. Political positions Transport policy Costa has repeatedly spoken out in favour of expanding public transport and sustainable transport infrastructure more quickly in order to support Portugal's climate protection efforts. For example, he advocated limiting fuel rebates to vulnerable groups at the start of the Ukraine war in order to use the funds thus freed up to support the expansion of carbon-neutral mobility systems. Costa wants to significantly expand the rail network and link Portugal more closely with Spain and the rest of Europe via high-speed railway lines. Urban planning should include links with public transport from the outset, and housing construction should be geared towards this, following the example of Vienna. Climate protection To be in line with the IPCC report's requirements for climate action, Costa called for more ambitious measures and advocated for the EU to aim for carbon neutrality by 2040 instead of 2050, and to reduce emissions by 80% by 2030 instead of the planned 55%. Urban planning should also be adapted to conditions changing due to climate change, with increasing heat waves and forest fires, and the energy efficiency of houses should be increased. In addition, more climate-friendly forms of mobility are to be promoted. Equality and LGBTQ rights Duarte Costa has repeatedly spoken out against racism, homophobia, misogyny and transphobia and regularly participates in Christopher Street Days. European policy By introducing transnational lists in European elections, Costa wants to deepen European democracy and bring citizens closer to the EU. In order to make the EU more capable of acting, Costa has also spoken out in favour of abolishing the right of veto in the European Council and proposes decision-making by qualified majority instead. In the future, the European Union should become a parliamentary democracy with a government elected by the EU Parliament. References
maibach
Maibach may refer to: Places Maibach (Poppenhausen), a locality of Poppenhausen, in Bavaria, Germany Maibach (Butzbach), a borough of Butzbach, in Hesse, Germany People with the surname Howard Maibach (born 1929), American dermatologist Edward Maibach, an expert in public health and climate change communication Other uses Maibach (Axtbach), a river of North Rhine-Westphalia, Germany See also Maybach
claude turmes
Claude Turmes (born 26 November 1960) is a Luxembourgish politician who served as a Member of the European Parliament (MEP) from 1999 until 2018. He is a member of the Green Party, part of the European Green Party. Turmes was elected as a member of the European Parliament in the 1999 European elections. In parliament, he first served on the Committee on Budgetary Control before joining the Committee on Industry, Research and Energy in 2002. In this capacity, he served as rapporteur on the 2008 draft of the EU Renewable Energy Directive 2009/28/EC and on the EU Energy Efficiency Directive 2012. Between 2007 and 2008, he was a member of the Temporary Committee on Climate Change. He also represented the Parliament at the 2008 United Nations Climate Change Conference in Poznań and the 2016 United Nations Climate Change Conference in Marrakesh. In 2011, Turmes was part of a cross-party working group headed by Jerzy Buzek, the President of the European Parliament, to draft reforms on lobbying and MEPs’ rules of conduct. In addition to his committee assignments, Turmes was a member of the European Parliament Intergroup on LGBT Rights and of the European Parliament Intergroup on the Welfare and Conservation of Animals. He became Secretary of State for Sustainable Development and Infrastructures in the Luxembourg government in June 2018, and was appointed Minister for Energy and Minister for Spatial Planning on 5 December 2018, serving until 2023. Other activities Energy Watch Group (EWG), Member European Forum for Renewable Energy Sources (EUFORES), President Agora Energiewende, Member of the Council References External links Official website Archived 22 February 2012 at the Wayback Machine Personal profile of Claude Turmes in the European Parliament's database of members
united nations convention to combat desertification
The United Nations Convention to Combat Desertification in Those Countries Experiencing Serious Drought and/or Desertification, Particularly in Africa (UNCCD) is a convention to combat desertification and mitigate the effects of drought through national action programs that incorporate long-term strategies supported by international cooperation and partnership arrangements. The Convention, the only convention stemming from a direct recommendation of the Rio Conference's Agenda 21, was adopted in Paris, France, on 17 June 1994 and entered into force in December 1996. It is the only internationally legally binding framework set up to address the problem of desertification. The Convention is based on the principles of participation, partnership and decentralization – the backbone of good governance and sustainable development. It has 197 parties, making it nearly universal in reach. To help publicise the Convention, 2006 was declared "International Year of Deserts and Desertification", but debates have ensued regarding how effective the International Year was in practice. Ibrahim Thiaw was appointed as Under Secretary General of the United Nations and UNCCD Executive Secretary on 31 January 2019. States Parties The UNCCD has been ratified by the European Union and 196 states: all 193 UN member states, the Cook Islands, Niue, and the State of Palestine. On 28 March 2013, Canada became the first country to withdraw from the convention. However, three years later, Canada reversed its withdrawal by re-acceding to the convention on 21 December 2016, which resulted in Canada becoming party to the convention again on 21 March 2017. The Holy See (Vatican City) is the only state that is not a party to the convention that is eligible to accede to it. Secretariat The permanent Secretariat of the UNCCD was established during the first Conference of the Parties (COP 1) held in Rome in 1997. It has been located in Bonn, Germany, since January 1999, and moved from its first Bonn address in Haus Carstanjen to the new UN Campus in July 2006. The functions of the secretariat are to make arrangements for sessions of the Conference of the Parties (COP) and its subsidiary bodies established under the Convention, and to provide them with services as required. One key task of the secretariat is to compile and transmit reports submitted to it. The secretariat also provides assistance to affected developing country parties, particularly those in Africa. This is important when compiling information and reports required under the Convention. UNCCD activities are coordinated with the secretariats of other relevant international bodies and conventions, like those of the UN Framework Convention on Climate Change (UNFCCC) and the Convention on Biological Diversity (CBD). Conference of the Parties The Conference of the Parties (COP) oversees the implementation of the Convention. It is established by the Convention as the supreme decision-making body, and it comprises all ratifying governments. The first five sessions of the COP were held annually from 1997 to 2001. Starting in 2001, sessions have been held on a biennial basis, alternating with the sessions of the Committee for the Review of the Implementation of the Convention (CRIC), whose first session was held in 2002. Committee on Science and Technology The UN Convention to Combat Desertification has established a Committee on Science and Technology (CST).
The CST was established under Article 24 of the Convention as a subsidiary body of the COP, and its mandate and terms of reference were defined and adopted during the first session of the Conference of the Parties in 1997. It is composed of government representatives competent in the fields of expertise relevant to combating desertification and mitigating the effects of drought. The committee identifies priorities for research and recommends ways of strengthening cooperation among researchers. It is multi-disciplinary and open to the participation of all Parties. It meets in conjunction with the ordinary sessions of the COP. The CST collects, analyses and reviews relevant data. It also promotes cooperation in the field of combating desertification and mitigating the effects of drought through appropriate sub-regional, regional and national institutions, and in particular through its activities in research and development, which contribute to increased knowledge of the processes leading to desertification and drought as well as their impact. The Bureau of the CST is composed of the Chairperson and the four Vice-Chairpersons. The Chairperson is elected by the Conference of the Parties at each of its sessions, with due regard to ensuring geographical distribution and adequate representation of affected country Parties, particularly those in Africa, and may not serve for more than two consecutive terms. The Bureau of the CST is responsible for the follow-up of the work of the Committee between sessions of the COP and may benefit from the assistance of ad hoc panels established by the COP. The CST also contributes to distinguishing causal factors, both natural and human, with a view to combating desertification and achieving improved productivity as well as the sustainable use and management of resources. Under the authority of the CST, a Group of Experts was established by the COP with a specific work programme, to assist in improving the efficiency and effectiveness of the CST. This group provides advice on the areas of drought and desertification. Group of Experts The Group of Experts (GoE) plays an important institutional role, providing the CST with information on current knowledge, the extent and impact of desertification, possible scenarios, and the policy implications of the various themes assigned in its work programme. The GoE's work is widely recognized and includes the dissemination of findings on ongoing activities (benchmarks and indicators, traditional knowledge, early warning systems). The Group of Experts develops and makes available to all interested people information on appropriate mechanisms for scientific and technological cooperation, and articulates research projects that promote awareness of desertification and drought among countries and stakeholders at the international, regional and national level. The Group of Experts seeks to build on and use existing work and evidence to produce pertinent syntheses and outputs for the use of the Parties to the Convention and for broader dissemination to the scientific community. The programme of work and its mandate are pluri-annual in nature, running for a maximum of four years. National, regional and sub-regional programmes National Action Programmes (NAP) are one of the key instruments in the implementation of the Convention. They are strengthened by Action Programmes on the Sub-regional (SRAP) and Regional (RAP) level.
National Action Programmes are developed through a participatory approach involving local communities, and they spell out the practical steps and measures to be taken to combat desertification in specific ecosystems. See also Action for Climate Empowerment Earth Summit Economics of Land Degradation Initiative Hama Arba Diallo International Year of Deserts and Desertification List of international environmental agreements Terrafrica partnership United Nations Framework Convention on Climate Change (UNFCCC) World Day to Combat Desertification and Drought References This article incorporates public domain material from The World Factbook (2023 ed.). CIA. (Archived 2003 edition) Full text available from UNCCD.int Rechkemmer, Andreas (2004): Postmodern Global Governance. The United Nations Convention to Combat Desertification. Baden-Baden: Nomos Verlag. External links UNCCD official website 2006: International Year of Deserts and Desertification The Economics of Land Degradation Initiative - Homepage UNESCO Water Portal: UNCCD
david g. barber
David George Barber (28 November 1960 – 15 April 2022) was a Canadian environmental scientist and academic known for his contributions to Arctic science, in particular the study of Arctic sea ice processes. He held the Canada Research Chair in Arctic-System Science at the University of Manitoba. He was an officer of the Order of Canada and a fellow of the Royal Society of Canada. Biography Barber obtained his bachelor's (1981) and master's (1987) degrees from the University of Manitoba, and his Ph.D. (1992) in Arctic climatology from the University of Waterloo. He started his academic career teaching at the University of Manitoba in 1993. He received a Canada Research Chair in Arctic System Science in 2002 at the University of Manitoba. He was also Associate Dean (Research), as well as Director of the Centre for Earth Observation Science, in the Faculty of Environment, Earth, and Resources. Barber was married to Lucette. The couple had three children. Barber died on 15 April 2022 following complications from cardiac arrest. Research Barber's research focused on the effects of climate change on Arctic sea ice, and on the development of tools to study its harmful effects. His early work for Fisheries and Oceans Canada studied marine mammal habitat detection and change in the Arctic. He used technologies including geographic information systems, remote sensing, and mathematical modelling to study linkages between the atmosphere, ocean, and ice, and to connect these to people and their habitats. He showed that some apparently frozen sea ice was in fact porous and fragile "rotten ice", and studied the effects of its presence on Arctic food chains. He led the development of many Arctic research projects including the Canadian Arctic Shelf Exchange Study (CASES), the Network of Centres of Excellence ArcticNet, and the Hudson Bay System Study (BaySys). At the University of Manitoba, he also led industry-academia outreach programs, including one with Manitoba Hydro. Barber also contributed to Arctic research infrastructure, including the CCGS Amundsen, a research vessel and icebreaker, and the setup of the Churchill Marine Observatory. Honours Barber was made an Officer of the Order of Canada in 2016. The citation accompanying the award called him one of Canada's most influential Arctic researchers and highlighted his role in expanding Canada's abilities in the detection and mitigation of transportation-related contaminant spills while contributing to policies and regulatory programs responding to the impact of climate change on the Arctic ecosystem. Barber was elected as a fellow of the Royal Society of Canada in 2016, and was a member of the Royal Canadian Geographical Society. He was also a recipient of the Northern Science Award for his advancement of northern research. References External links David Barber on reduction of Arctic sea ice cover at TEDxUManitoba on YouTube Exploring the Arctic – David Barber – MSU Canadian Studies on YouTube David Barber on the Atlantification of the Arctic on YouTube
list of countries by greenhouse gas emissions
This is a list of sovereign states and territories by greenhouse gas emissions due to certain forms of human activity, based on the EDGAR database created by the European Commission. The following table lists the 1970, 1990, 2005, 2017 and 2022 annual GHG emissions estimates (in kilotons of CO2 equivalent per year) along with a list of calculated emissions per capita (in metric tons of CO2 equivalent per year). The data include carbon dioxide, methane and nitrous oxide from all sources, including agriculture and land use change. They are measured in carbon dioxide-equivalents over a 100-year timescale. The Intergovernmental Panel on Climate Change (IPCC) 6th assessment report finds that the “Agriculture, Forestry and Other Land Use (AFOLU)” sector, on average, accounted for 13–21% of global total anthropogenic GHG emissions in the period 2010–2019. Land use change drives net AFOLU CO2 emission fluxes, with deforestation being responsible for 45% of total AFOLU emissions. In addition to being a net carbon sink and source of GHG emissions, land plays an important role in climate through albedo effects, evapotranspiration, and aerosol loading through emissions of volatile organic compounds. The IPCC report finds that the LULUCF sector offers significant near-term mitigation potential while providing food, wood and other renewable resources as well as biodiversity conservation. Mitigation measures in forests and other natural ecosystems provide the largest share of the LULUCF mitigation potential between 2020 and 2050. Among various LULUCF activities, reducing deforestation has the largest potential to reduce anthropogenic GHG emissions, followed by carbon sequestration in agriculture and ecosystem restoration including afforestation and reforestation. Land use change emissions can be negative. In 2022, GHG emissions from the top 10 countries with the highest emissions accounted for almost two thirds of the global total. Since 2006, China has been emitting more CO2 than any other country. However, the main disadvantage of measuring total national emissions is that it does not take population size into account. China has the largest CO2 and GHG emissions in the world, but also the largest population. For a fair comparison, emissions should be analyzed in terms of the amount of CO2 and GHG per capita. Considering per capita GHG emissions in 2022, China's levels (10.95) are about 60 percent of those of the United States (17.90) and less than a sixth of those of Qatar (67.38, the country with the highest per capita GHG emissions in 2022). China, the United States, India, the EU27, Russia and Brazil were the world's six largest GHG emitters in 2022. Together they account for 50.1% of global population, 61.2% of global Gross Domestic Product (GDP), 63.4% of global fossil fuel consumption and 61.6% of global GHG emissions. Even in 2022, global GHG emissions primarily consisted of CO2 resulting from the combustion of fossil fuels (71.6%). CH4 contributed 21% to the total, while the remaining share of emissions comprised N2O (4.8%) and F-gases (2.6%). Measures of territorial-based emissions, also known as production-based emissions, do not account for emissions embedded in global trade, where emissions may be imported or exported in the form of traded goods, as they only report emissions emitted within geographical boundaries.
Accordingly, a proportion of the CO2 produced and reported in Asia and Africa is for the production of goods consumed in Europe and North America. According to the review of the scientific literature conducted by the Intergovernmental Panel on Climate Change (IPCC), carbon dioxide is the most important anthropogenic greenhouse gas by warming contribution. The European Union is at the forefront of international efforts to reduce greenhouse gas emissions and thus safeguard the planet's climate. Greenhouse gases (GHG) – primarily carbon dioxide but also others, including methane and chlorofluorocarbons – trap heat in the atmosphere, leading to global warming. Higher temperatures then act on the climate, with varying effects. For example, dry regions might become drier while, at the poles, the ice caps are melting, causing higher sea levels. In 2016, the global average temperature was already 1.1 °C above pre-industrial levels. Per capita GHG emissions GHG emissions by country/territory The data in the following table are extracted from EDGAR - Emissions Database for Global Atmospheric Research. Notes References See also List of countries by carbon dioxide emissions per capita List of countries by carbon intensity of GDP List of countries by renewable electricity production List of countries by greenhouse gas emissions per person Top contributors to greenhouse gas emissions United Nations | Sustainable Development Goal 13 - Climate action External links UN Sustainable Development Knowledge Platform – The SDGs GHG data from UNFCCC – United Nations Framework Convention on Climate Change greenhouse gas (GHG) emissions data Total greenhouse gas emissions (kt of CO2 equivalent) – World Bank CO2 emissions in metric tons per capita – Google Public Data Explorer
greenhouse gas
Greenhouse gases are the gases in the atmosphere that raise the surface temperature of planets such as the Earth. What distinguishes them from other gases is that they absorb the wavelengths of radiation that a planet emits, resulting in the greenhouse effect. The Earth is warmed by sunlight, causing its surface to radiate heat, which is then mostly absorbed by water vapor (H2O), carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and ozone (O3). Without greenhouse gases, the average temperature of Earth's surface would be about −18 °C (0 °F), rather than the present average of 15 °C (59 °F). Human activities since the beginning of the Industrial Revolution (around 1750) have increased atmospheric methane concentrations by over 150% and carbon dioxide by over 50%, up to a level not seen in over 3 million years. Carbon dioxide is causing about three quarters of global warming and can take thousands of years to be fully absorbed by the carbon cycle. Methane causes most of the remaining warming and lasts in the atmosphere for an average of 12 years. The vast majority of carbon dioxide emissions by humans come from the combustion of fossil fuels, principally coal, petroleum (oil) and natural gas. Additional contributions come from cement manufacturing, fertilizer production, and changes in land use like deforestation. Methane emissions originate from agriculture, fossil fuel production, waste, and other sources. According to Berkeley Earth, average global surface temperature has risen by more than 1.2 °C (2.2 °F) since the pre-industrial (1850–1899) period as a result of greenhouse gas emissions. If current emission rates continue, temperatures will surpass 2.0 °C (3.6 °F) sometime between 2040 and 2070, which is the level the United Nations' Intergovernmental Panel on Climate Change (IPCC) says is "dangerous". Definition Greenhouse gases are infrared active gases that absorb and emit infrared radiation in the wavelength range emitted by Earth.: 2233  Carbon dioxide (0.04%), nitrous oxide, methane, and ozone are trace gases that account for almost 0.1% of Earth's atmosphere and have an appreciable greenhouse effect. A formal definition of greenhouse gases is as follows: "Gaseous constituents of the atmosphere, both natural and anthropogenic, that absorb and emit radiation at specific wavelengths within the spectrum of radiation emitted by the Earth’s surface, by the atmosphere itself, and by clouds. This property causes the greenhouse effect.": 2233  The radiation emitted by the Earth’s surface, the atmosphere and clouds is called thermal infrared or longwave radiation.: 2251  The most abundant greenhouse gases in Earth's atmosphere, listed in decreasing order of average global mole fraction, are: Water vapor (H2O) Carbon dioxide (CO2) Methane (CH4) Nitrous oxide (N2O) Ozone (O3) Chlorofluorocarbons (CFCs and HCFCs) Hydrofluorocarbons (HFCs) Perfluorocarbons (CF4, C2F6, etc.), SF6, and NF3. Water vapor is a potent greenhouse gas but not one that humans are directly adding to. It is therefore not one of the drivers of climate change that the IPCC (Intergovernmental Panel on Climate Change) is concerned with, and it is not included in the IPCC list of greenhouse gases. Changes in water vapor are a feedback that impacts climate sensitivity in complicated ways (mostly because of clouds).
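The −18 °C baseline quoted at the start of this article follows from a simple planetary energy balance. A minimal sketch of that calculation, assuming standard textbook values for the solar constant (about 1361 W/m²) and Earth's Bond albedo (about 0.3), neither of which is given in this article:

```python
# Minimal sketch: Earth's effective ("no greenhouse") temperature from the
# Stefan-Boltzmann law. S and A are assumed textbook values, not article data.
S = 1361.0       # solar constant at Earth's orbit, W/m^2 (assumed)
A = 0.30         # Bond albedo: fraction of incoming sunlight reflected (assumed)
sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Balance absorbed sunlight, averaged over the sphere, against emitted
# thermal radiation: (1 - A) * S / 4 = sigma * T**4, then solve for T.
T = ((1 - A) * S / (4 * sigma)) ** 0.25
print(f"Effective temperature: {T:.1f} K ({T - 273.15:.1f} degC)")
# Prints about 254.6 K (-18.5 degC); the ~33 degC gap up to the observed
# 15 degC average is the greenhouse effect described above.
```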
Infrared active gases Gases which can absorb and emit thermal infrared radiation are said to be infrared active. Most gases whose molecules have two different atoms (such as carbon monoxide, CO), and all gases with three or more atoms (including H2O and CO2), are infrared active and act as greenhouse gases. Technically, this is because an asymmetry in the molecule's electric charge distribution allows molecular vibrations to interact with electromagnetic radiation. Gases with only one atom (such as argon, Ar) or with two identical atoms (such as nitrogen, N2, and oxygen, O2) are not infrared active. They are transparent to thermal radiation, and, for practical purposes, do not absorb or emit thermal radiation. This is because monatomic gases such as Ar do not have vibrational modes, and molecules containing two atoms of the same element such as N2 and O2 have no asymmetry in the distribution of their electrical charges when they vibrate. Hence they are almost totally unaffected by infrared thermal radiation. N2 and O2 are able to absorb and emit very small amounts of infrared thermal radiation as a result of collision-induced absorption. However, even taking relative abundances into account, this effect is small compared to the influences of Earth's major greenhouse gases. The major constituents of Earth's atmosphere, nitrogen (N2) (78%), oxygen (O2) (21%), and argon (Ar) (0.9%), are not infrared active and so are not greenhouse gases. These gases make up more than 99% of the dry atmosphere. Sources Natural sources Most greenhouse gases have both natural and human-caused sources. The exceptions are purely human-produced synthetic halocarbons, which have no natural sources. During the pre-industrial Holocene, concentrations of existing gases were roughly constant, because the large natural sources and sinks roughly balanced. In the industrial era, human activities have added greenhouse gases to the atmosphere, mainly through the burning of fossil fuels and clearing of forests. Greenhouse gas emissions from human activities Water vapor Water vapor accounts for the largest percentage of the greenhouse effect, between 36% and 66% for clear sky conditions and between 66% and 85% when including clouds. Water vapor concentrations fluctuate regionally, but human activity does not directly affect water vapor concentrations except at local scales, such as near irrigated fields. Indirectly, human activity that increases global temperatures will increase water vapor concentrations, a process known as water vapor feedback. The atmospheric concentration of vapor is highly variable and depends largely on temperature, from less than 0.01% in extremely cold regions up to 3% by mass in saturated air at about 32 °C (see Relative humidity). The average residence time of a water molecule in the atmosphere is only about nine days, compared to years or centuries for other greenhouse gases such as CH4 and CO2. Water vapor responds to and amplifies effects of the other greenhouse gases. The Clausius–Clapeyron relation establishes that more water vapor will be present per unit volume at elevated temperatures. This and other basic principles indicate that warming associated with increased concentrations of the other greenhouse gases will also increase the concentration of water vapor (assuming that the relative humidity remains approximately constant; modeling and observational studies find that this is indeed so).
Because water vapor is a greenhouse gas, this results in further warming and so is a "positive feedback" that amplifies the original warming. Current estimates (as of 2000) suggest that water vapor feedback has a "gain" coefficient of about 0.4; a gain coefficient must be 1 or greater to create an unstable feedback loop of the sort that could stimulate runaway warming (a short derivation of this threshold is given at the end of this section). Thus, although water vapor feedback amplifies the impact of temperature changes caused by other factors, there is no indication that Earth is involved in a runaway greenhouse effect of the sort that could lead to Venus-like conditions. Role in heat transport and radiative forcing Effects on air and surface Absorption and emission of thermal radiation by greenhouse gases plays a role in heat transport in the air and at the surface: Atmospheric cooling: Greenhouse gases emit more thermal radiation than they absorb, and so have an overall cooling effect on air.: 139  Inhibition of radiative surface cooling: Greenhouse gases limit radiative heat flow away from the surface and within the lower atmosphere. Greenhouse gases exchange thermal radiation with the surface, reducing the overall rate of upward radiative heat transfer.: 139  Naming these effects contributes to a full understanding of the role of greenhouse gases. However, these effects are of secondary importance when it comes to understanding global warming. It is important to focus on top-of-atmosphere energy balance in order to correctly reason about global warming. It has been argued that focusing on the surface energy budget instead leads to faulty reasoning; this "surface budget fallacy" is a common error when thinking about the greenhouse effect and global warming.: 413  Effect at top-of-atmosphere (TOA) At the top of the atmosphere (TOA), absorption and emission of thermal radiation by greenhouse gases leads to inhibition of radiative cooling to space, which means the amount of thermal radiation reaching space is reduced, relative to what is emitted by the surface. The change in TOA energy balance leads to the surface accumulating thermal energy and warming until TOA energy balance is achieved. Radiative forcing Radiative forcing is a metric that characterizes the impact of an external change in a factor that influences climate, e.g., a change in the concentration of greenhouse gases, or the effect of a volcanic eruption. The radiative forcing associated with a change is calculated as the change in the top-of-atmosphere (TOA) energy balance that would be caused by the external change, if one imagined that the change could be made without giving the troposphere or surface time to respond to reduce the imbalance. A positive forcing indicates more energy arriving than leaving.: 2245  The term radiative forcing has been used inconsistently in the scientific literature. Increasing the concentration of greenhouse gases is associated with a positive radiative forcing and tends to increase the TOA energy imbalance, leading to additional warming. The major non-gas contributor to Earth's greenhouse effect, clouds, also absorb and emit infrared radiation and thus have an effect on greenhouse gas radiative properties. Clouds are water droplets or ice crystals suspended in the atmosphere.
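The "gain" coefficient quoted earlier in this section can be made concrete with a short derivation (the notation here is assumed for illustration, not taken from the cited estimates): an initial warming ΔT₀ triggers extra water vapor, which adds g·ΔT₀ of further warming, which in turn adds g²·ΔT₀, and so on, giving a geometric series:

```latex
\Delta T_{\text{total}}
  = \Delta T_0 \left( 1 + g + g^2 + \cdots \right)
  = \frac{\Delta T_0}{1 - g},
  \qquad 0 \le g < 1 .
```

With g ≈ 0.4 the series converges to roughly ΔT₀/0.6 ≈ 1.67 ΔT₀, a finite amplification; for g ≥ 1 the series diverges, which is why a gain of 1 or greater would imply the unstable, runaway feedback mentioned above.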
Chemical process contributions to radiative forcing Some gases contribute indirectly to altering the TOA radiative balance through participation in chemical processes within the atmosphere. Oxidation of CO to CO2 directly produces an unambiguous increase in radiative forcing, although the reason is subtle. The peak of the thermal IR emission from Earth's surface is very close to a strong vibrational absorption band of CO2 (wavelength 15 microns, or wavenumber 667 cm−1). On the other hand, the single CO vibrational band only absorbs IR at much shorter wavelengths (4.7 microns, or 2145 cm−1), where the emission of radiant energy from Earth's surface is at least a factor of ten lower (a sketch of the conversion between wavelength and wavenumber units is given at the end of this section). Oxidation of methane to CO2, which requires reactions with the OH radical, produces an instantaneous reduction in radiative absorption and emission since CO2 is a weaker greenhouse gas than methane. However, the oxidations of CO and CH4 are entwined since both consume OH radicals. In any case, the calculation of the total radiative effect includes both direct and indirect forcing. A second type of indirect effect happens when chemical reactions in the atmosphere involving these gases change the concentrations of greenhouse gases. For example, the destruction of non-methane volatile organic compounds (NMVOCs) in the atmosphere can produce ozone. The size of the indirect effect can depend strongly on where and when the gas is emitted. Methane has indirect effects in addition to forming CO2. The main chemical that reacts with methane in the atmosphere is the hydroxyl radical (OH); thus more methane means that the concentration of OH goes down. Effectively, methane increases its own atmospheric lifetime and therefore its overall radiative effect. The oxidation of methane can produce both ozone and water, and is a major source of water vapor in the normally dry stratosphere. CO and NMVOCs produce CO2 when they are oxidized. They remove OH from the atmosphere, and this leads to higher concentrations of methane. The surprising effect of this is that the global warming potential of CO is three times that of CO2. The same process that converts NMVOCs to carbon dioxide can also lead to the formation of tropospheric ozone. Halocarbons have an indirect effect because they destroy stratospheric ozone. Finally, hydrogen can lead to ozone production and CH4 increases as well as producing stratospheric water vapor. Role in greenhouse effect Contributions to the overall greenhouse effect The most important contributions to the total greenhouse effect are shown in the following table. Greenhouse gases not listed explicitly above include sulfur hexafluoride, hydrofluorocarbons and perfluorocarbons (see IPCC list of greenhouse gases). It is not possible to state that a certain gas causes an exact percentage of the greenhouse effect. This is because some of the gases absorb and emit radiation at the same frequencies as others, so that the total greenhouse effect is not simply the sum of the influence of each gas. The higher ends of the ranges quoted are for each gas alone; the lower ends account for overlaps with the other gases. In addition, some gases, such as methane, are known to have large indirect effects that are still being quantified.
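The wavelength and wavenumber pairs quoted earlier in this section are two ways of writing the same band positions, related by a simple reciprocal. A minimal sketch of the conversion (the function name is illustrative, and the quoted band figures are rounded):

```python
# Minimal sketch: converting an infrared wavelength in micrometres to a
# wavenumber in cm^-1. Since 1 um = 1e-4 cm, wavenumber = 1e4 / wavelength.
def wavenumber_cm1(wavelength_um: float) -> float:
    return 1e4 / wavelength_um

print(wavenumber_cm1(15.0))  # ~666.7 cm^-1: the strong CO2 bending band (quoted as 667)
print(wavenumber_cm1(4.7))   # ~2127.7 cm^-1: the CO band region (quoted as 2145 cm^-1;
                             # both figures are rounded values for the same band)
```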
Contributions to enhanced greenhouse effect Anthropogenic changes to the greenhouse effect are referred to as the enhanced greenhouse effect.: 2223  The contribution of each gas to the enhanced greenhouse effect is determined by the characteristics of that gas, its abundance, and any indirect effects it may cause. For example, the direct radiative effect of a mass of methane is about 84 times stronger than the same mass of carbon dioxide over a 20-year time frame, but it is present in much smaller concentrations so that its total direct radiative effect has so far been smaller, in part due to its shorter atmospheric lifetime in the absence of additional carbon sequestration. On the other hand, in addition to its direct radiative impact, methane has a large, indirect radiative effect because it contributes to ozone formation. A publication from 2005 said that the contribution to climate change from methane was at least double previous estimates as a result of this effect. Radiative forcing and annual greenhouse gas index Earth absorbs some of the radiant energy received from the sun, reflects some of it as light and reflects or radiates the rest back to space as heat. A planet's surface temperature depends on this balance between incoming and outgoing energy. When Earth's energy balance is shifted, its surface becomes warmer or cooler, leading to a variety of changes in global climate. A number of natural and human-made mechanisms can affect the global energy balance and force changes in Earth's climate. Greenhouse gases are one such mechanism. Greenhouse gases absorb and emit some of the outgoing energy radiated from Earth's surface, causing that heat to be retained in the lower atmosphere. As explained above, some greenhouse gases, such as nitrous oxide and fluorinated gases, remain in the atmosphere for decades or even centuries, and therefore can affect Earth's energy balance over a long period. Radiative forcing quantifies (in watts per square meter) the effect of factors that influence Earth's energy balance, including changes in the concentrations of greenhouse gases. Positive radiative forcing leads to warming by increasing the net incoming energy, whereas negative radiative forcing leads to cooling, as with gases such as sulfur dioxide that have an anti-greenhouse (cooling) effect. The Annual Greenhouse Gas Index (AGGI) is defined by atmospheric scientists at NOAA as the ratio of total direct radiative forcing due to long-lived and well-mixed greenhouse gases for any year for which adequate global measurements exist, to that present in year 1990. These radiative forcing levels are relative to those present in year 1750 (i.e. prior to the start of the industrial era). 1990 is chosen because it is the baseline year for the Kyoto Protocol, and is the publication year of the first IPCC Scientific Assessment of Climate Change. As such, NOAA states that the AGGI "measures the commitment that (global) society has already made to living in a changing climate. It is based on the highest quality atmospheric observations from sites around the world. Its uncertainty is very low." Global warming potential The global warming potential (GWP) depends on both the efficiency of the molecule as a greenhouse gas and its atmospheric lifetime. GWP is measured relative to the same mass of CO2 and evaluated for a specific timescale. Thus, if a gas has a high (positive) radiative forcing but also a short lifetime, it will have a large GWP on a 20-year scale but a small one on a 100-year scale.
Conversely, if a molecule has a longer atmospheric lifetime than CO2, its GWP will increase as longer timescales are considered. Carbon dioxide is defined to have a GWP of 1 over all time periods. Methane has an atmospheric lifetime of 12 ± 2 years. The 2021 IPCC report lists the GWP as 83 over a time scale of 20 years, 30 over 100 years and 10 over 500 years. A 2014 analysis, however, states that although methane's initial impact is about 100 times greater than that of CO2, because of the shorter atmospheric lifetime, after six or seven decades the impact of the two gases is about equal, and from then on methane's relative role continues to decline. The decrease in GWP at longer times is because methane decomposes to water and CO2 through chemical reactions in the atmosphere. Examples of the atmospheric lifetime and GWP relative to CO2 for several greenhouse gases are given in the following table: The use of CFC-12 (except some essential uses) has been phased out due to its ozone depleting properties. The phasing-out of less active HCFC-compounds will be completed in 2030. Concentrations in the atmosphere Factors affecting concentrations Atmospheric concentrations are determined by the balance between sources (emissions of the gas from human activities and natural systems) and sinks (the removal of the gas from the atmosphere by conversion to a different chemical compound or absorption by bodies of water). Airborne fraction The proportion of an emission remaining in the atmosphere after a specified time is the "airborne fraction" (AF). The annual airborne fraction is the ratio of the atmospheric increase in a given year to that year's total emissions. As of 2006 the annual airborne fraction for CO2 was about 0.45. The annual airborne fraction increased at a rate of 0.25 ± 0.21% per year over the period 1959–2006. Atmospheric lifetime Aside from water vapor, which has a residence time of about nine days, major greenhouse gases are well mixed and take many years to leave the atmosphere. Although it is not easy to know with precision how long it takes greenhouse gases to leave the atmosphere, there are estimates for the principal greenhouse gases. Jacob (1999) defines the lifetime τ of an atmospheric species X in a one-box model as the average time that a molecule of X remains in the box. Mathematically, τ can be defined as the ratio of the mass m (in kg) of X in the box to its removal rate, which is the sum of the flow of X out of the box (F_out), chemical loss of X (L), and deposition of X (D) (all in kg/s):

$\tau = \dfrac{m}{F_{\text{out}} + L + D}$

If input of this gas into the box ceased, then after time τ its concentration would decrease by about 63%, that is, to 1/e ≈ 37% of its initial value. The atmospheric lifetime of a species therefore measures the time required to restore equilibrium following a sudden increase or decrease in its concentration in the atmosphere. Individual atoms or molecules may be lost or deposited to sinks such as the soil, the oceans and other waters, or vegetation and other biological systems, reducing the excess to background concentrations. The average time taken to achieve this is the mean lifetime. Carbon dioxide has a variable atmospheric lifetime, and cannot be specified precisely.: 2237  Similar issues apply to other greenhouse gases, many of which have longer mean lifetimes than CO2, e.g.
N2O has a mean atmospheric lifetime of 121 years. Current concentrations Abbreviations used in the two tables below: ppm = parts-per-million; ppb = parts-per-billion; ppt = parts-per-trillion; W/m2 = watts per square meter Changes since the Industrial Revolution Since the beginning of the Industrial Revolution, the concentrations of many of the greenhouse gases have increased. For example, the mole fraction of carbon dioxide has increased from 280 ppm to 421 ppm, or 140 ppm over modern pre-industrial levels. The first 30 ppm increase took place in about 200 years, from the start of the Industrial Revolution to 1958; however, the next 90 ppm increase took place within 56 years, from 1958 to 2014. Recent data also shows that the concentration is increasing at a higher rate. In the 1960s, the average annual increase was only 37% of what it was in 2000 through 2007. Many observations are available online in a variety of Atmospheric Chemistry Observational Databases. Measurements from ice cores over the past 800,000 years Ice cores provide evidence for greenhouse gas concentration variations over the past 800,000 years (see the following section). Both CO2 and CH4 vary between glacial and interglacial phases, and concentrations of these gases correlate strongly with temperature. Direct data does not exist for periods earlier than those represented in the ice core record, a record that indicates CO2 mole fractions stayed within a range of 180 ppm to 280 ppm throughout the last 800,000 years, until the increase of the last 250 years. However, various proxies and modeling suggest larger variations in past epochs; 500 million years ago CO2 levels were likely 10 times higher than now. Indeed, higher CO2 concentrations are thought to have prevailed throughout most of the Phanerozoic Eon, with concentrations four to six times current concentrations during the Mesozoic era, and ten to fifteen times current concentrations during the early Palaeozoic era until the middle of the Devonian period, about 400 Ma. The spread of land plants is thought to have reduced CO2 concentrations during the late Devonian, and plant activities as both sources and sinks of CO2 have since been important in providing stabilizing feedbacks. Earlier still, a 200-million year period of intermittent, widespread glaciation extending close to the equator (Snowball Earth) appears to have been ended suddenly, about 550 Ma, by a colossal volcanic outgassing that raised the CO2 concentration of the atmosphere abruptly to 12%, about 350 times modern levels, causing extreme greenhouse conditions and carbonate deposition as limestone at the rate of about 1 mm per day. This episode marked the close of the Precambrian Eon, and was succeeded by the generally warmer conditions of the Phanerozoic, during which multicellular animal and plant life evolved. No volcanic carbon dioxide emission of comparable scale has occurred since. In the modern era, emissions to the atmosphere from volcanoes are approximately 0.645 billion tons of CO2 per year, whereas humans contribute 29 billion tons of CO2 each year. Measurements from Antarctic ice cores show that before industrial emissions started atmospheric CO2 mole fractions were about 280 parts per million (ppm), and stayed between 260 and 280 during the preceding ten thousand years. Carbon dioxide mole fractions in the atmosphere have gone up by approximately 38 percent since the 1900s, rising from 280 parts per million by volume to 387 parts per million in 2009.
One study using evidence from stomata of fossilized leaves suggests greater variability, with carbon dioxide mole fractions above 300 ppm during the period seven to ten thousand years ago, though others have argued that these findings more likely reflect calibration or contamination problems rather than actual CO2 variability. Because of the way air is trapped in ice (pores in the ice close off slowly to form bubbles deep within the firn) and the time period represented in each ice sample analyzed, these figures represent averages of atmospheric concentrations of up to a few centuries rather than annual or decadal levels. Removal from the atmosphere Natural processes Greenhouse gases can be removed from the atmosphere by various processes, as a consequence of: a physical change (condensation and precipitation remove water vapor from the atmosphere). a chemical reaction within the atmosphere. For example, methane is oxidized by reaction with naturally occurring hydroxyl radical (OH·) and converted to CO2 and water vapor (CO2 from the oxidation of methane is not included in the methane Global warming potential). Other chemical reactions include solution and solid phase chemistry occurring in atmospheric aerosols. a physical exchange between the atmosphere and the other components of the planet. An example is the mixing of atmospheric gases into the oceans. a chemical change at the interface between the atmosphere and the other components of the planet. This is the case for CO2, which is reduced by photosynthesis of plants, and which, after dissolving in the oceans, reacts to form carbonic acid and bicarbonate and carbonate ions (see ocean acidification). a photochemical change. Halocarbons are dissociated by UV light releasing Cl· and F· as free radicals in the stratosphere with harmful effects on ozone (halocarbons are generally too stable to disappear by chemical reaction in the atmosphere). Negative emissions A number of technologies remove greenhouse gases from the atmosphere. Most widely analyzed are those that remove carbon dioxide from the atmosphere, either to geologic formations such as bio-energy with carbon capture and storage and carbon dioxide air capture, or to the soil as in the case of biochar. Many long-term climate scenario models require large-scale human-made negative emissions to avoid serious climate change. Negative emissions approaches are also being studied for atmospheric methane, called atmospheric methane removal. History of scientific research In the late 19th century, scientists experimentally discovered that N2 and O2 do not absorb infrared radiation (called, at that time, "dark radiation"), while water (both as true vapor and condensed in the form of microscopic droplets suspended in clouds) and CO2 and other poly-atomic gaseous molecules do absorb infrared radiation. In the early 20th century, researchers realized that greenhouse gases in the atmosphere made Earth's overall temperature higher than it would be without them. During the late 20th century, a scientific consensus evolved that increasing concentrations of greenhouse gases in the atmosphere cause a substantial rise in global temperatures and changes to other parts of the climate system, with consequences for the environment and for human health. Other planets Greenhouse gases exist in many atmospheres, creating greenhouse effects on Mars, Titan and particularly in the thick atmosphere of Venus. See also References Works cited Blasing, T.J.
Greenhouse gas emissions by Australia
Greenhouse gas emissions by Australia totalled 533 million tonnes CO2-equivalent based on greenhouse gas national inventory report data for 2019, representing per capita CO2e emissions of 21 tons, three times the global average. Coal was responsible for 30% of emissions. The national Greenhouse Gas Inventory estimate for the year to March 2021 was 494.2 million tonnes, which is 27.8 million tonnes, or 5.3%, lower than the previous year, and 20.8% lower than in 2005 (the baseline year for the Paris Agreement). According to the government, the result reflects the decrease in transport emissions due to COVID-19 pandemic restrictions, reduced fugitive emissions, and reductions in emissions from electricity; however, there were increased greenhouse gas emissions from the land and agriculture sectors.
Australia principally uses coal power for electricity, which accounted for 66% of grid-connected electricity generation in 2020, but this share is rapidly decreasing as a growing share of renewables makes up the energy supply mix, with most existing coal-fired power stations scheduled to cease operation between 2022 and 2048. Emissions by the country have started to fall and are expected to continue to fall in coming years as more renewable projects come online.
Climate Action Tracker rates Australia's overall commitment to emissions reduction as "highly insufficient". Policies and action as well as the domestic target are rated "insufficient", the fair share target "highly insufficient", and climate finance "critically insufficient". This is because the Australian government has continued to invest in natural gas projects, refused to increase its 2030 domestic emissions target, and is not on track to meet its current target.
Climate change in Australia is caused by greenhouse gas emissions, and the country is generally becoming hotter and more prone to extreme heat, bushfires, droughts, floods, and longer fire seasons because of climate change.

Contribution

Total contributions
The Australian government calculates that Australia's net emissions (including land use, land-use change, and forestry) for the 12-month period to September 2020 were 510.10 million tonnes CO2-equivalent. The sectoral contributions based on the IPCC Fifth Assessment Report metrics were as follows: electricity 170.36 Mt, 33.4%; stationary energy (excluding electricity) 101.83 Mt, 20.0%; transport 89.83 Mt, 17.6%; agriculture 72.04 Mt, 14.1%; fugitive emissions 51.23 Mt, 10.0%; industrial processes 30.29 Mt, 5.9%; waste 13.28 Mt, 2.6%; and LULUCF -18.76 Mt, -3.7% (due to carbon sequestration).
In 2017, electricity sector emissions totaled 190 million tons, of which 20 million tons was for primary industry, 49 million tons for manufacturing (which might include aluminum smelting), 51 million tons for commercial, construction and transport uses, and 33 million tons residential.
The Australian National Greenhouse Gas Inventory (NGGI) indicated in 2006 that the energy sector accounts for 69 per cent of Australia's emissions, agriculture 16 per cent, and LULUCF six per cent. Since 1990, however, emissions from the energy sector have increased 35 per cent (stationary energy up 43% and transport up 23%). By comparison, emissions from LULUCF have fallen by 73%. However, questions have been raised about the veracity of the estimates of emissions from the LULUCF sector because of discrepancies between the Australian federal and Queensland governments' land clearing data.
Data published by the Statewide Landcover and Trees Study (SLATS) in Queensland, for example, show that the total amount of land clearing in Queensland identified under SLATS between 1989/90 and 2000/01 is approximately 50 per cent higher than the amount estimated by the Australian federal government's National Carbon Accounting System (NCAS) between 1990 and 2001.

Cumulative historical contribution
The World Resources Institute estimates that Australia was responsible for 1.1% of all CO2 emissions between 1850 and 2002. Consolidated historical data put Australia's total fossil fuel and cement production emissions (excluding LULUCF) at 18.18 billion tons out of the world's 1.65 trillion tons, or 1.10%. However, because Australia has significant negative emissions from LULUCF relative to other countries, its net cumulative contribution is likely much lower.

Projected contribution
According to the no-mitigation scenario in the Garnaut Climate Change Review, Australia's share of world emissions, at 1.5% in 2005, declines to 1.1% by 2030 and to 1% by 2100.

Responsibility
According to the polluter pays principle, the polluter bears ecological and financial responsibility for the consequences of climate change. Climate change is caused cumulatively, and today's emissions will have effects for decades to come. Australia's CO2 emissions per capita were 15.22-15.37 tonnes in 2020, the 11th highest in the world, just ahead of the United Arab Emirates and the United States.

Emission sources
Some of the reasons for Australia's high levels of emissions include:
In 2020, 73.5% of electricity was generated from fossil fuels (66% from coal and 7.5% from gas).
A warm climate results in high use of air conditioning.
Agriculture, such as methane from sheep and cow belches.
High levels of automobile and aeroplane use among the population.
Continued deforestation.

Production and export of carbon products
Australian emissions are monitored on a production rather than a consumption basis. This means that the emissions from the manufacture of goods imported into and consumed within Australia, for example many motor vehicles, are allocated to the country of manufacture. Similarly, Australia produces aluminium for export, which emits carbon dioxide during refining. While the aluminium is mainly consumed overseas, the emissions from its production are allocated to Australia.

Coal
In 2018 Australia was the world's second largest exporter of coal. Australia is the world's largest exporter of metallurgical coal, accounting for 55% of the world's supply in 2019.

LNG
Australia became the world's largest exporter of liquefied natural gas in 2020.

Mitigation (technology aspects)
Mitigation of global warming involves taking actions to reduce greenhouse gas emissions and to enhance sinks, in order to reduce the extent of global warming. This is in distinction to adaptation to global warming, which involves taking action to minimize the effects of global warming. Scientific consensus on global warming, together with the precautionary principle and the fear of non-linear climate transitions, is leading to increased effort to develop new technologies and sciences and carefully manage others in an attempt to mitigate global warming.
To make a significant change, coal from Australia needs to be replaced with alternatives. Carbon capture and storage in Australia has been put forward as a solution for the production of clean hydrogen from natural gas. Following the introduction of government mandatory renewable energy targets, more opportunities have opened up for renewable energy technologies such as wind power, photovoltaics, and solar thermal technologies. Accelerating the deployment of these technologies provides opportunities for mitigating greenhouse gases.
A carbon price was introduced in 2012 by the government of Julia Gillard with the purpose of reducing Australia's carbon emissions. It required large businesses (defined as those with annual carbon dioxide equivalent emissions over 25,000 tonnes) to pay a price for emissions permits. The tax was scrapped by the Abbott government in 2014 in a widely criticised and highly publicised move.

Coal
Coal is the most polluting of all fossil fuels and the single greatest threat to the climate; every stage of coal use brings substantial environmental damage. Phasing out fossil fuel energy is one of the most important elements of climate change mitigation. Today coal supplies over one third of Australia's energy. Brown coal is by far the most polluting and is currently used in Victoria. To have a significant effect on greenhouse gas emissions, coal needs to be replaced with alternatives.
Reduction in the mining, use and export of coal is favoured by environmental groups such as Greenpeace. Almost all coal emissions were emitted by coal-fired power stations. Coal was responsible for 30% (164 million tonnes) of Australia's greenhouse gas emissions, not counting methane and export coal, based on the 2019 GHG inventory.
Two forms of coal are mined in Australia, depending on the region: high quality black coal and lower quality brown coal. Black coal is mainly found in Queensland and New South Wales and is used both for domestic power generation and for export overseas. It is normally mined underground before being transported to power stations or export shipping terminals. Brown coal is mainly found in Victoria and South Australia and is of lower quality due to a higher ash and water content. Today there are three open cut brown coal mines in Victoria used for baseload power generation.

Carbon capture and storage
The Rudd-Gillard government stated support for research into carbon capture and storage (CCS) as a possible solution to rising greenhouse gas emissions. CCS is an integrated process made up of three distinct parts: carbon capture, transport, and storage (including measurement, monitoring and verification). Capture technology aims to produce a concentrated stream of CO2 that can be compressed, transported, and stored. Transport of captured CO2 to storage locations is most likely to be via pipeline. Storage of the captured carbon is the final part of the process; the vast majority of CO2 storage is expected to occur in geological sites on land or below the seabed. However, according to the Greenpeace False Hope Report, CCS cannot deliver in time to avoid a dangerous increase in world temperatures. The report also states that CCS wastes energy, using between 10 and 40% of the energy produced by a power station; that it is expensive, potentially doubling plant costs; and that it is very risky, as permanent storage cannot be guaranteed.
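To illustrate what the 10-40% energy-penalty range quoted from the Greenpeace report implies, here is a minimal sketch; the plant's gross generation figure is a hypothetical example, not taken from the report:

```python
# Net electricity delivered by a power station fitted with CCS, given the
# 10-40% energy penalty range quoted above. Gross generation is hypothetical.
gross_generation_mwh = 1_000_000  # annual gross output of an example plant

for penalty in (0.10, 0.40):
    net = gross_generation_mwh * (1 - penalty)
    fuel_multiplier = 1 / (1 - penalty)  # fuel needed per delivered MWh
    print(f"penalty {penalty:.0%}: net {net:,.0f} MWh, "
          f"fuel per delivered MWh up {fuel_multiplier:.2f}x")
```

At the top of the range, delivering the same net energy would require roughly 1.67 times as much fuel to be burned, which is the sense in which the report calls CCS wasteful.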
Nuclear energy
Australia has approximately 40% of the world's uranium deposits and is the world's third largest producer of uranium. Life-cycle greenhouse gas emissions from nuclear power are low. The only nuclear reactor in Australia is operated by ANSTO in the Sydney suburb of Lucas Heights. The main argument against building more is that electricity from new nuclear plants is more expensive than new solar power. Other perceived problems include that enriched uranium can also be used in nuclear weapons, prompting security issues such as nuclear proliferation. In addition, nuclear waste requires extensive waste management because it can remain radioactive for centuries.

Renewable energy
Renewable energy technologies currently contribute about 6.2% of Australia's total energy supply and 21.3% of Australia's electricity supply, with hydro-electricity the largest single contributor and wind power a close second. Initiatives are also being taken with ethanol fuel and geothermal energy exploration.

Renewable energy targets
Moving towards long-term mitigation policies is a requirement of government, and the Australian energy sector remains central to national emissions. The International Energy Agency (IEA) reviewed the Australian energy sector's policies in 2018; the findings identified needed improvements to the country's emissions reduction targets and to the energy sector's resilience. The IEA also identified a need for government leadership in establishing a well-defined, long-term integrated energy policy and a climate toolkit for policy development and deployment.
The Australian Government has announced a mandatory renewable energy target (MRET) to ensure that renewable energy obtains a 20% share of electricity supply in Australia by 2020. To ensure this, the government has committed that the MRET will increase from 9,500 gigawatt-hours to 45,000 gigawatt-hours by 2020. After 2020, the proposed ETS and improved efficiencies from innovation and manufacturing are expected to allow the MRET to be phased out by 2030.
Following the introduction of government mandatory renewable energy targets, more opportunities have opened up for renewable energy technologies such as wind power, photovoltaics, and solar thermal technologies. The deployment of these technologies provides opportunities for mitigating greenhouse gases.

Solar power
By 2020, Italy, Japan and Germany each had more installed solar power than Australia, despite their lower solar potential.

Wind power
Wind farms are highly compatible with agricultural and pastoral land use.

Bioenergy
Bioenergy is energy produced from biomass. Biomass is material produced by photosynthesis, or an organic by-product from a waste stream; it can thus be seen as stored solar energy. In terms of reducing greenhouse gas emissions, biomass offers several types of contribution: liquid and gaseous biofuels can substitute for oil in transportation; biomass can be used in place of many greenhouse-intensive materials; and biomass can be converted to biochar, an organic charcoal that greatly enhances the ability of soil to sequester carbon. Sustainable energy expert Mark Diesendorf suggests that bioenergy could provide 39% of Australia's electricity generation.

Solar heat and electricity
Solar heat and electricity together have the potential to supply all of Australia's energy while using less than 0.1% of land.
With suitable government policies, particularly at the state and local levels, solar hot water could cost-effectively provide the vast majority of hot water systems in Australia and make considerable reductions in residential electricity consumption. Solar electricity's potential scale of application is huge, and its prospects for further substantial cost reductions are excellent.

Energy efficiency
The most important energy saving options include improved thermal insulation and building design, super-efficient electrical machines and drives, and a reduction in energy consumption by vehicles used for goods and passenger traffic. Industrialized countries such as Australia, which currently use energy in the least efficient way, can reduce their consumption drastically without the loss of either housing comfort or amenity. Increased energy efficiency of buildings had the support of the former leader of the federal opposition, Malcolm Turnbull.

Energy storage
Hydrogen may become an important export.

Biochar
Biochar has been promoted as a technique for mitigation of global warming. The former leader of the federal opposition, Malcolm Turnbull, brought biochar into the political debate by announcing that burying agricultural waste was one of three under-invested areas that his mitigation strategy was committed to opening up. Publications and interest groups which track the fledgling Australian industry are divided over the suitability of biochar to the economy. Brian Toohey of The Australian Financial Review has said it is yet to be proven commercially viable. Friends of the Earth Australia, one of the larger environmental lobby groups, is fundamentally opposed to biochar, calling it "part of a series of false solutions to climate change" which will be "based on large-scale industrial plantations and will lead to the acquisition of large tracts of land, furthering the erosion of indigenous peoples' and community rights while not adequately addressing the climate crisis". Green Left Weekly has published several editorials supporting the development of a large-scale biochar industry.

Reforestation
Reforestation programs have overlapping adaptation and mitigation roles, and in 2014 the "20 Million Trees Programme" was announced as a national strategy. The plan aimed to strengthen native resilience against climatic changes by creating a self-sustaining tree-based ecosystem through the planting of 20 million native trees across Australia by 2020. The Programme falls under the authority of the Australian Government's National Landcare Programme. Increasing the coverage of flora has the potential to improve the habitability of areas threatened by climatic change and to improve ecological communities that may be threatened or endangered. The Commonwealth government announced a plan in 2019 to invest in Australia's forestry industry by planting 1 billion trees in nine forestry hubs throughout Australia by 2030. Land management and biodiversity programs have emissions reduction benefits for both agriculture and the environment; advantages stem from the land's increased ability to adapt to climatic changes by helping to fight soil erosion and stabilize soil, as well as by providing shelter to native and agricultural animals. AUD 1 billion will be invested in the National Landcare Program between 2018-19 and 2022-23.

Mitigation (policy aspects)
The economic impact of a 60% reduction of emissions by 2050 was modeled in 2006 in a study commissioned by the Australian Business Roundtable on Climate Change.
The World Resources Institute identifies policy uncertainty and over-reliance on international markets as the top threats to Australia's GHG mitigation.

Domestic
After contributing to the development of, then signing but not ratifying, the Kyoto Protocol, action to address climate change was coordinated through the Australian Greenhouse Office. The Australian Greenhouse Office released the National Greenhouse Strategy in 1998. The report recognized that climate change was of global significance and that Australia had an international obligation to address the problem. In 2000, the Senate Environment, Communications, Information Technology and the Arts References Committee conducted an inquiry that produced The Heat is On: Australia's Greenhouse Future.
One of Australia's first national attempts to reduce emissions was a voluntary initiative called the Greenhouse Challenge Program, which began in 1995. A collection of measures focused on reducing the environmental impacts of the energy sector was released by Prime Minister John Howard on 20 November 1997 in a policy statement called Safeguarding Our Future: Australia's Response to Climate Change. One measure was the establishment of the Australian Greenhouse Office, set up as the world's first dedicated greenhouse office in April 1998.
Domestically, the Clean Energy Act 2011 addresses GHG with an emissions cap, carbon price, and subsidies. Emissions by the electricity sector are addressed by renewable energy targets at multiple scales, the Australian Renewable Energy Agency (ARENA), the Clean Energy Finance Corporation (CEFC), carbon capture and storage flagships, and feed-in tariffs on solar panels. Emissions by the industrial sector are addressed by the Energy Efficiency Opportunities (EEO) program. Emissions by the building sector are addressed by building codes, minimum energy performance standards, the Commercial Building Disclosure program, state energy-saving obligations, and the National Energy Saving Initiative. Emissions by the transportation sector are addressed by reduced fuel tax credits and vehicle emissions performance standards. Emissions by the agricultural sector are addressed by the Carbon Farming Initiative and state land-clearing laws. Emissions by the land use sector are addressed by the Clean Energy Future Package, which consists of the Carbon Farming Futures program, the Diversity Fund, the Regional Natural Resources Management Planning for Climate Change Fund, the Indigenous Carbon Farming Fund, the Carbon Pollution Reduction Scheme (CPRS), and the Carbon Farming Skills program. State energy saving schemes vary by state, with the Energy Saving Scheme (ESS) in New South Wales, the Residential Energy Efficiency Scheme (REES) in South Australia, the Energy Saver Incentive Scheme (ESI) in Victoria, and the Energy Efficiency Improvement Scheme (EEIS) in the Australian Capital Territory.

Carbon Trading and Emission Trading Scheme
In June 2007, former Australian Prime Minister John Howard announced that Australia would adopt a carbon trading scheme by 2012. The scheme was expected to resemble its counterparts in the United States and the European Union, using carbon credits under which businesses must purchase a licence in order to generate pollution. The scheme received broad criticism from both the ALP and the Greens. The ALP believed the scheme was too weak, as well as a bad political move by the government.
The lack of a clear target for this scheme before the 2007 federal election produced a high degree of skepticism about the government's willingness to mitigate global warming in Australia. In March 2008, the newly elected Labor government of Prime Minister Kevin Rudd announced that the Carbon Pollution Reduction Scheme (a cap-and-trade emissions trading system) would be introduced in 2010; however, this scheme was initially delayed by a year to mid-2011, and subsequently delayed further until 2013. In April 2010, Kevin Rudd announced the delay of the CPRS until after the commitment period of the Kyoto Protocol, which ends in 2012. The reasons given for the delay were the lack of bipartisan support for the CPRS and slow international progress on climate action. The Federal Opposition strongly criticised the delay, as did community and grassroots action groups such as GetUp.

Prime Ministerial Task Group

Carbon taxation
Another method of mitigating global warming considered by the Australian Government is a carbon tax. This method would involve imposing an additional tax on the use of fossil fuels to generate energy. Compared to the CPRS and an emissions trading scheme, a carbon tax would fix the price of all carbon emissions while leaving the quantity of emissions to be determined by the market. The tax would primarily aim to reduce the use of fossil fuels for energy generation, and would also look to increase efficient energy use and demand for alternative energies.
A carbon tax was introduced by the government of Julia Gillard on 1 July 2012. It required businesses emitting over 25,000 tonnes of carbon dioxide equivalent emissions annually to purchase emissions permits, which initially cost A$23 for one tonne of CO2 equivalent. The tax was repealed by the Australian Senate on 17 July 2014. The reason given for the repeal by Australia's then prime minister, Tony Abbott, was that the tax cost jobs and increased energy prices. Opponents of the repeal say that Australian pollution has increased since the tax's repeal. Since the repeal there have been several calls to re-implement the tax from multiple public figures, including Woodside Petroleum CEO Peter Coleman.

Pathways for climate change mitigation

Greenpeace energy revolution
Greenpeace calls for a complete energy revolution. There are some fundamental aspects to this revolution, aimed at changing the way that energy is produced, distributed and consumed. The five principles of this revolution are:
implement renewable solutions, especially through decentralized energy systems;
respect the natural limits of the environment;
phase out dirty, unsustainable energy sources;
create greater equity in the use of resources;
decouple economic growth from the consumption of fossil fuels.
Other goals of the energy revolution are: 40% of electricity provided by renewable sources by 2020; coal-fired power phased out entirely by 2030; and using electricity for the transport system while cutting consumption of fossil fuels through efficiency. The energy revolution report also offers policy suggestions for the Australian Government in regard to climate change.
Policy suggestions of the report include:
legislate a greenhouse gas reduction target of greater than 40% below 1990 levels by 2020;
establish an emissions trading scheme that delivers a decrease in emissions in line with legislated interim targets;
legislate a national target for 40% of electricity to be generated by renewable energy sources by 2020;
invest massively in the deployment of renewable energy and strongly regulate for energy efficiency measures;
establish an immediate moratorium on new coal-fired power stations and extensions to existing ones, and phase out existing coal-fired power stations in Australia by 2030;
set a target of 2% per year for reducing Australia's primary energy demand;
ensure transitional arrangements for coal-dependent communities that might be affected by the transition to a clean energy economy;
redirect all public subsidies that encourage the use and production of fossil fuels towards implementing energy efficiency programs, deploying renewable energy and supporting the upgrading of public transport infrastructure;
develop a highly trained "green" workforce through investment in training programs and apprenticeships.

Climate Code Red: The case for a sustainability emergency
Climate Code Red states that the key strategies for cutting greenhouse gas emissions to zero are resource efficiency backed up by the substitution of renewable energy for fossil fuel sources. The report cites ultra-efficient technologies and synergies, and wind power, as ways to tackle the climate change problem within Australia. Climate Code Red also outlines a rapid transition to a safe-climate economy. This plan includes:
building the capacity to plan, coordinate and allocate resources for high-priority infrastructure projects, and to invest sufficiently in the means to make safe-climate producer and consumer goods;
fostering research and innovation to produce, develop and scale up the necessary technologies, products and processes;
national building and industry energy efficiency programmes, including mandatory and enforceable minimum standards for domestic and commercial buildings, and the allocation of public resources to help householders, especially those with limited financial capacity, to reduce energy use;
the rapid construction of capacity across a range of renewable technologies at both a national and micro level to produce sufficient electricity to allow the closure of the fossil fuel-fired generating industry;
the conversion and expansion of Australia's car industry to manufacture zero-emission vehicles for public and private transport;
the renewal and electrification of national and regional train networks to provide the capacity to shift all long-distance freight from road and air to rail (see also: high-speed rail in Australia);
providing safe-climate expertise, technologies, goods and services to less developed nations to support their transition to the post-carbon world;
adjustment and reskilling programmes for workers, communities and industries affected by the impacts of global warming and by the transition to the new economy.

Garnaut climate change review

Green paper 2008

Climate Change Authority review
The Australian Climate Change Authority made recommendations to the Commonwealth government in 2016 to develop a toolkit of policies to guide the country into the future; the focal point for the "toolkit" is Australia's Paris Agreement obligations.
In 2017, the Commonwealth government commissioned an effectiveness assessment of emissions reduction policies to meet its Paris Agreement obligations by 2030. The evaluation was to develop both adaptation and mitigation measures covering all sectors of the economy; under the Paris Agreement, these measures fall under the "ratchet mechanism". To meet the Paris Agreement's 2 °C limit on global median temperature rise, a five-year review and adjustment cycle will commence in 2023.

Solutions
There are a number of ways to achieve the goals outlined above, including implementing clean, renewable solutions and decentralizing energy systems. Existing technologies are available to use energy effectively and ecologically, including solar, wind, and other renewable technologies, which have experienced double-digit market growth globally in the last decade.
A large section of the scientific community believes that one of the real solutions to avoiding dangerous climate change lies in renewable energy and energy efficiency that can start protecting the climate today. Technically accessible renewable energy sources such as wind, wave, and solar are capable of providing six times more energy than the world currently consumes. As coal is one of the highest emitters of greenhouse gases, closing coal power stations is one of the most powerful tools for carbon emission reduction.
The city of Melbourne is working with the wider Australian government to make Melbourne carbon neutral by the year 2050. The plan, named Melbourne Together for 1.5°C, includes ways for Melbourne to reduce the impact of waste, and models for reducing transport and building emissions to zero. It is a continuation of a plan created in 2003 to make Melbourne carbon neutral by 2020, which did not succeed.

Federal Government action

Howard government
The Howard government was resistant to taking action against global warming that would harm Australia's economy, a policy continued from the prior Keating government. In 1996, in the lead-up to the Kyoto treaty, this go-slow attitude caused conflict with the US and EU, which at that time were proposing legally binding emissions targets as part of Kyoto. Australia was unwilling to accept stricter timeframes and emissions reduction targets, such as the 20% cut (from 1990 levels by 2005) proposed by smaller Pacific island states, because of its carbon-intensive economy. Increasingly, in the lead-up to the Kyoto conference, the Howard government became internationally isolated on its climate change policy, with Australia's opposition to binding targets "figur[ing] prominently in the prime minister's [recent] discussions in Washington and London", as highlighted in a Cabinet memo. In 1997 the Cabinet agreed to establish a climate change taskforce to strengthen its Kyoto bargaining position.
In 1998, the Australian Government under Prime Minister John Howard established the Australian Greenhouse Office, then the world's first government agency dedicated to cutting greenhouse gas emissions. Also in 1998, Australia signed but did not ratify the Kyoto Protocol. The Australian Greenhouse Office put forward proposals for emissions reductions in 2000 (rejected in cabinet), 2003 (vetoed by Howard), and 2006, the last of which was accepted by Howard and became the basis for his pre-election emissions trading scheme proposal.

Rudd government
In 2007, after the first Rudd government was sworn in, the new Department of Climate Change was established under the Prime Minister and Cabinet portfolio and entrusted with coordinating and leading climate policy. The Kyoto Protocol was ratified nine days later. The 2009 budget committed the government to a 25% reduction by 2020 on 2000 levels if "the world agrees to an ambitious global deal to stabilise levels of CO2 equivalent at 450 parts per million or lower by mid-century".
On 1 December 2009, Malcolm Turnbull, the then opposition leader, was unseated by Tony Abbott, voiding a speculated deal on an emissions trading scheme between the opposition and the government. This happened a day before the second rejection of the Carbon Pollution Reduction Scheme bill by the Senate on 2 December 2009. On 2 February 2010, the emissions trading scheme legislation was introduced for the third time; it was voted down again, and the Liberal Party unveiled its own climate mitigation legislation, the Direct Action Plan.
On 27 April 2010, Prime Minister Kevin Rudd announced that the Government had decided to delay the implementation of the Carbon Pollution Reduction Scheme (CPRS) until the end of the first commitment period of the Kyoto Protocol (ending in 2012). The government cited the lack of bipartisan support for the CPRS, the withdrawal of support by the Greens, and slow international progress on climate action after the Copenhagen Summit as the reasons for the decision. The delay was strongly criticised by the Federal Opposition under Abbott and by community and grassroots action groups such as GetUp.

Gillard (and second Rudd) government
To reduce Australia's carbon emissions, the government of Julia Gillard introduced a carbon tax on 1 July 2012, which required large businesses, defined as those emitting over 25,000 tons of carbon dioxide equivalent annually, to purchase emissions permits. The carbon tax reduced Australia's carbon dioxide emissions, with coal generation down 11% since 2008-09.

Abbott government
The subsequent Australian Government, elected in 2013 under then Prime Minister Tony Abbott, was criticised for being "in complete denial about climate change". Abbott became known for his anti-climate change positions, as was evident in a number of policies adopted by his administration. At a global warming meeting held in the United Kingdom, he reportedly said that proponents of climate change are alarmists, underscoring a need for "evidence-based" policymaking. The Abbott government repealed the carbon tax on 17 July 2014 in a heavily criticised move.
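For illustration, here is a minimal sketch of how a liable entity's cost under the now-repealed carbon price could be computed; the A$23 price and 25,000-tonne threshold are those described above, while the example emitter's figures are hypothetical and the scheme's detailed sector coverage rules are omitted:

```python
# Simplified liability under the 2012 Australian carbon price.
PRICE_AUD_PER_TONNE = 23.0   # initial fixed price per tonne CO2-e (from the text)
THRESHOLD_TONNES = 25_000    # annual emissions above which a business was liable

def carbon_price_liability(annual_tonnes_co2e: float) -> float:
    """Return the A$ cost of permits; zero below the liability threshold."""
    if annual_tonnes_co2e <= THRESHOLD_TONNES:
        return 0.0
    return annual_tonnes_co2e * PRICE_AUD_PER_TONNE

# Hypothetical emitter of 80,000 t CO2-e per year -> A$1,840,000.
print(f"A${carbon_price_liability(80_000):,.0f}")
```

Because the price applied per tonne, the scheme gave large emitters a direct, proportional financial incentive to cut emissions.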
The renewable energy target (RET), launched in 2001, was also modified.

Turnbull government
Under the government of Malcolm Turnbull, Australia attended the 2015 United Nations Climate Change Conference and adopted the Paris Agreement, which includes a review of emission reduction targets every five years from 2020. Australia's Clean Energy Target (CET) came under threat in October 2017 from former Prime Minister Tony Abbott; this could have led to the Australian Labor Party withdrawing support from the Turnbull government's new energy policy.
Climate policy continues to be controversial. Following the repeal of the carbon price in the previous parliament, the Emissions Reduction Fund (ERF) became Australia's main mechanism to reduce greenhouse gas emissions. However, two-thirds of the ERF's allocated $2.5 billion funding has now been spent, and the ERF and other policies will need further funding if Australia's climate targets are to be achieved.

Morrison government
Under the Morrison government, Australia drew criticism for planning to use a carbon accounting loophole from the expiring Kyoto Protocol to fulfil its (already modest) Paris commitments. According to Climate Analytics, Australia pledged in Paris to cut its emissions to between 26% and 28% below 2005 levels by 2030, but it is currently on track for a 7% cut. The Coalition government repeatedly claimed in 2019 that it had turned around the trajectory of greenhouse gas emissions it inherited from the Labor government; Scott Morrison, Angus Taylor and other senior Coalition figures repeated this claim. The Coalition in fact inherited a strong position from the Labor government, which had enacted the carbon tax. There are also suggestions that disinformation has been spread about the causes of Australia's bushfires. On 1 November 2019, Scott Morrison outlined in a speech to mining delegates at the Queensland Resources Council that he planned to legislate to outlaw climate boycotts.

State government actions
Per person emissions vary considerably by state.

Victoria
The state of Victoria has been particularly proactive in pursuing reductions in GHG through a range of initiatives. In 1989 it produced the first state climate change strategy, "The Greenhouse Challenge". Other states have also taken a more proactive stance than the federal government. One initiative undertaken by the Victorian Government is the 2002 Greenhouse Challenge for Energy Policy package, which aims to reduce Victorian emissions through a mandated renewable energy target. Initially, it aimed to have a 10 per cent share of Victoria's energy consumption produced by renewable technologies by 2010, with 1,000 MW of wind power under construction by 2006. The government legislated to ensure that by 2016 electricity retailers in Victoria purchase 10 per cent of their energy from renewables; this was ultimately overtaken by the national Renewable Energy Target (RET). By providing a market incentive for the development of renewables, the government helps foster the renewable energy sector. A Green Paper and White Paper on Climate Change were produced in 2010, including funding for a number of programs, and a Climate Change Act was passed including targets for a 50% reduction in emissions; a recent review of this Act has recommended further changes. An Australian court stopped a logging project in Victoria on the grounds that it would be particularly destructive after the bushfires, and the Premier of Victoria, Daniel Andrews, announced that logging in the state will be banned by 2030.
South Australia
Former Premier Mike Rann (2002-2011) was Australia's first climate change minister and passed legislation committing South Australia to renewable energy and emissions reduction targets. Announced in March 2006, this was the first legislation passed anywhere in Australia committing to emissions cuts. By the end of 2011, 26% of South Australia's electricity generation derived from wind power, edging out coal-fired power for the first time. Although only 7.2% of Australia's population lived in South Australia, in 2011 it had 54% of Australia's installed wind capacity. Following the introduction of solar feed-in tariff legislation, South Australia also had the highest per-capita take-up of household rooftop photovoltaic installations in Australia. In an educative program, the Rann government invested in installing rooftop solar arrays on major public buildings including the Parliament, Museum, Adelaide Airport, Adelaide Showgrounds pavilion, and public schools. About 31% of South Australia's total power is derived from renewables. In the five years to the end of 2011, South Australia experienced a 15% drop in emissions, despite strong employment and economic growth during this period.
In 2010, the Solar Art Prize was created by Pip Fletcher and has run annually since, inviting artists from South Australia to reflect subjects of climate change and environmentalism in their work. Some winning artists receive renewable energy service prizes which can be redeemed as solar panels, solar hot water or battery storage systems.

See also
Climate change in Australia
Coal mining in Australia
Environmental issues in Australia
Impact of the COVID-19 pandemic on the environment
Plug-in electric vehicles in Australia
Greenhouse gas emissions by the United States
The United States produced 5.2 billion metric tons of carbon dioxide equivalent greenhouse gas (GHG) emissions in 2020, the second largest total in the world after that of China, and is among the countries with the highest greenhouse gas emissions per person. In 2019, China is estimated to have emitted 27% of world GHG, followed by the United States with 11%, then India with 6.6%. In total, the United States has emitted a quarter of world GHG, more than any other country. Annual emissions are over 15 tons per person, and amongst the top eight emitters the United States has the highest emissions per person. The IEA estimates that the richest decile in the US emits over 55 tonnes of CO2 per capita each year. Because coal-fired power stations are gradually shutting down, in the 2010s emissions from electricity generation fell to second place behind transportation, which is now the largest single source. In 2020, 27% of the GHG emissions of the United States were from transportation, 25% from electricity, 24% from industry, 13% from commercial and residential buildings, and 11% from agriculture. In 2021, the electric power sector was the second largest source of U.S. greenhouse gas emissions, accounting for 25% of the U.S. total. These greenhouse gas emissions are contributing to climate change in the United States, as well as worldwide.

Background

Types of greenhouse gases
Greenhouse gases, including carbon dioxide, nitrous oxide, ozone, methane, fluorinated gases and others, are gases that absorb and emit radiant energy in the atmosphere. Atmospheric concentrations of greenhouse gases have increased significantly since the Industrial Revolution due to human activities. The main greenhouse gases are carbon dioxide, methane, nitrous oxide, and fluorinated gases. Human-driven activity, known as anthropogenic activity, is causing many detrimental effects on the planet, including erratic weather patterns, droughts and heat waves, wildfires, ocean acidification, sea level rise, glacial melting, increased average global temperatures, and extinctions.
Greenhouse gases vary in how long they remain in the atmosphere. Regardless of where they are emitted, greenhouse gases become roughly evenly mixed throughout the atmosphere. Concentrations are measured in parts per million (ppm), parts per billion (ppb), and parts per trillion (ppt). In 2019, the atmospheric concentration of carbon dioxide was 409.8 parts per million. These gases trap heat like a blanket around the Earth, causing global warming.

Sources of greenhouse gases
Carbon dioxide enters the atmosphere through the mass burning of fossil fuels such as coal, natural gas, and oil, along with trees, solid waste, and biological materials. Carbon dioxide was estimated to account for approximately 81% of all US greenhouse gas emissions in 2018. Natural sinks and reservoirs, which can include the ocean, forests and vegetation, and the ground, absorb carbon dioxide emissions through a process called the carbon cycle.
Methane is mainly produced by livestock and agricultural practices, and was estimated to make up 10% of emitted greenhouse gases.
Owing to the decrease in non-agricultural GHG emissions during COVID-19, the share of the USA's GHG emissions from livestock increased from 2.6% to about 5%, a smaller percentage than in many other countries, likely because the USA has more greenhouse gas emissions from vehicles, machines, and factories. Nitrous oxide is a greenhouse gas produced mainly by agriculture. Fluorinated gases are synthetically produced and used as substitutes for stratospheric ozone-depleting substances.
Greenhouse gases are produced by a wide variety of human activities, though some of the greatest impacts come from burning fossil fuels, deforestation, agriculture, and industrial manufacturing. In the United States, power generation was the largest source of emissions for many years, but in 2017 the transportation sector overtook it as the leading emissions source. As of that year, the breakdown was transportation at 29%, followed by electricity generation at 28% and industry at 22%.
After carbon dioxide, the next most abundant compound is methane, though there have been methodological differences in how to measure its effects. According to a 2016 study, US methane emissions were underestimated by the EPA for at least a decade, by some 30 to 50 percent. Currently, the US government is working to reduce methane emissions in the agriculture, mining, landfill, and petroleum industries.
Another area of concern is ozone-depleting substances such as chlorofluorocarbons (CFCs), and their replacements such as hydrofluorocarbons (HFCs), which are often potent greenhouse gases with serious global warming potential (GWP). However, significant progress has been made in reducing the usage of these gases as a result of the Montreal Protocol, the international treaty that took effect in 1989.

Major emissions-creating events
In February 2018, an explosion and blowout in a natural gas well in Belmont County, Ohio, was detected by the Copernicus Sentinel-5P satellite's Tropospheric Monitoring Instrument. The well was owned by XTO Energy. About 30 homes were evacuated, and brine and produced water were discharged into streams flowing into the Ohio River. The blowout lasted 20 days, releasing more than 50,000 tons of methane into the atmosphere: more methane than most European nations discharge from their oil and gas industries in a year.

Reporting requirement
Reporting of greenhouse gases was first implemented on a voluntary basis with the creation of a federal register of greenhouse gas emissions, authorized under Section 1605(b) of the Energy Policy Act of 1992. This program provides a means for utilities, industries, and other entities to establish a public record of their emissions and the results of voluntary measures to reduce, avoid, or sequester GHG emissions. In 2009, the United States Environmental Protection Agency established a similar program mandating reporting for facilities that produce 25,000 or more metric tons of carbon dioxide per year. This has resulted in thousands of US companies monitoring and reporting their greenhouse gas emissions, covering about half of all GHG emissions in the United States.
A separate inventory of fossil fuel CO2 emissions is provided by Project Vulcan, a NASA/DOE funded effort to quantify North American fossil fuel emissions over time.

Mitigation

Federal Policies
The United States government has held shifting attitudes toward addressing greenhouse gas emissions.
The George W. Bush administration opted not to sign the Kyoto Protocol, but the Obama administration entered the Paris Agreement. The Trump administration withdrew from the Paris Agreement while increasing exports of crude oil and gas, making the United States the largest producer. In 2021, the Biden administration committed to reducing emissions to half of 2005 levels by 2030. In 2022, President Biden signed the Inflation Reduction Act into law, which is estimated to provide around $375 billion over 10 years to fight climate change. As of 2022, the US government's social cost of carbon is $51 per tonne, whereas academics say it should be more than three times higher.

Cross-sectoral
State and Local Climate and Energy Program
Federal Energy Management Program

Transportation
The transportation sector accounted for nearly 29% of GHG emissions in the United States in 2019, with 58% of those emissions coming from light-duty vehicles. As of 2021, states lack legislation for low-emission zones. Programs to reduce greenhouse gas emissions from the transportation sector include:
The Corporate Average Fuel Economy (CAFE) Program: requires automobile manufacturers to meet average fuel economy standards for the light-duty vehicles, large passenger vans and SUVs sold in the United States. Fuel economy standards vary according to the size of the vehicle.
SmartWay: helps improve environmental outcomes for companies in the freight industry.
Renewable Fuel Standard: under the Energy Policy Act of 2005, the United States Environmental Protection Agency is responsible for promulgating regulations to ensure that gasoline sold in the United States contains a specific volume of renewable fuel.
FreedomCAR and Fuel Partnership and Vehicle Technologies Program: works jointly with DOE's hydrogen, fuel cell, and infrastructure R&D efforts and with efforts to develop improved technology for hybrid electric vehicles, including components such as batteries and electric motors. (The U.S. government uses six "criteria pollutants" as indicators of air quality: ozone, carbon monoxide, sulfur dioxide, nitrogen oxides, particulate matter, and lead; the list does not include carbon dioxide and other greenhouse gases.)
Clean Cities: a network of local coalitions created by DOE in 1993 that works to support energy efficiency and clean fuel efforts in local transportation contexts.
Congestion Mitigation and Air Quality Improvement (CMAQ) Program: provides funds to states to improve air quality and congestion through the implementation of surface transportation projects (e.g., traffic flow and public transit improvements).
Aviation industry regulation: emissions from commercial and business jets make up 10% of U.S. transportation sector emissions and 3% of total national GHG emissions. In 2016, the EPA issued an "endangerment finding" that allowed the agency to regulate aircraft emissions, and the first proposed standards under that legal determination were issued in July 2020.
Developing alternative energy sources: the Department of Energy's Bioenergy Technologies Office (BETO) supports research into biofuels as part of that agency's efforts to reduce transportation-related GHG emissions.
Diesel Emissions Reduction Act (DERA) Program: provides grants for diesel emissions reduction projects and technologies.

Energy consumption, residential and commercial
As of 2020, buildings in the United States consume roughly 40% of the country's total energy and contribute a similar percentage of GHG emissions.
EPA and DOE clean energy programs in this area include:
Energy Star
Commercial Building Integration
Residential Building Integration
Weatherization Assistance Program
State Energy Program

Energy consumption, industrial
Energy Star for industry
Industrial Technologies Program (ITP)

Energy supply
The Coalbed Methane Outreach Program (CMOP) works to reduce methane released into the atmosphere as a result of coal mining by supporting recovery of naturally occurring coal mine gases and encouraging the production of coalbed methane energy, among other uses. The Natural Gas STAR Program plays a similar role for the natural gas industry.
The government also supports alternative energy sources that do not rely on fossil fuels, including wind power, solar power, geothermal power, and biofuel. These clean energy sources can often be integrated into the electric grid in what are known as distributed generation systems. Related programs include:
EPA Clean Energy Programs - Green Power Partnership
EPA Clean Energy Programs - Combined Heat and Power Partnership

Carbon capture and storage
Research Program
Advanced Energy Systems Program
CO2 Capture
CO2 Storage

Agriculture
Environmental Quality Incentives Program
Conservation Reserve Program
Conservation Security Program
AgSTAR Program

Forestry
Healthy Forests Initiative
Forest Land Enhancement Program

Waste management
The Landfill Methane Outreach Program (LMOP) promotes the use of landfill gas, a naturally occurring byproduct of decaying landfill waste, as a sustainable energy source. Besides reducing emissions, landfill gas utilization has also been credited with reductions in air pollution, improvements to health and safety conditions, and economic benefits for local communities. In addition to reducing emissions from waste already in landfills, the EPA's WasteWise program works with businesses to encourage recycling and source reduction to keep waste out of landfills in the first place.

Regional initiatives
Western Climate Initiative
The Regional Greenhouse Gas Initiative (RGGI), founded in 2007 by nine northeastern U.S. states, is a state-level emissions capping and trading program whose participants now comprise Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New York, Rhode Island, Vermont, and Virginia. It is a cap and trade program in which states "sell nearly all emission allowances through auctions and invest proceeds in energy efficiency, renewable energy and other consumer benefit programs."
Western Governors Association Clean and Diversified Energy Initiative
Powering the Plains
Carbon Sequestration Regional Partnerships
U.S. Mayors Climate Protection Agreement
National Governors Association's (NGA) Securing a Clean Energy Future

State Policies

California
Vehicle Air Pollution (Senate Resolution 27): states that California does not have to adhere to cutbacks in federal emissions standards, thereby allowing stricter California emissions standards than the federal government's. This Senate resolution stems from the previous administration's efforts to reverse environmental policies, in this case vehicle emissions standards. California's authority to set its own emissions standards rests on the Clean Air Act preemption waiver granted to the state by the EPA in 2009; the waiver applies to vehicles made in 2009 and later. The previous state standard included a goal for certain vehicles to reach an average 35 miles per gallon.
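For context on what a fuel-economy goal like the 35 mpg figure implies for CO2, here is a minimal sketch; it assumes EPA's commonly cited conversion factor of roughly 8,887 grams of CO2 per gallon of gasoline, which is not stated in the text above:

```python
# Approximate tailpipe CO2 per mile implied by a fuel-economy figure.
# Assumes EPA's ~8,887 g CO2 per gallon of gasoline (not from the text).
G_CO2_PER_GALLON = 8887.0

def grams_co2_per_mile(mpg: float) -> float:
    return G_CO2_PER_GALLON / mpg

for mpg in (20, 35, 50):
    print(f"{mpg} mpg -> {grams_co2_per_mile(mpg):.0f} g CO2 per mile")
# The 35 mpg goal works out to roughly 254 g CO2 per mile, so raising
# fleet-average mpg directly lowers per-mile emissions.
```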
California saw a large decline in vehicle emissions from 2007 to 2013 but a rise in emissions after 2013, attributable to circumstances including population and employment growth and increases in overall state GDP, indicating more economic activity in the state.
Cap-and-Trade Program: a market-based carbon pricing program that sets a statewide cap on emissions. This cap declines annually and applies to large emitters that account for over 80 percent of California's GHG emissions. The California Air Resources Board (CARB) creates an allowance for each ton of carbon dioxide emissions; the number of allowances decreases over time and incentivizes a flexible approach to emissions reduction through trading.
Advanced Clean Cars: addresses GHG emissions and criteria air pollutants in California through the Low-Emission Vehicle (LEV) regulation and the Zero-Emission Vehicle (ZEV) regulation. The LEV regulation establishes increasingly strict emissions standards for passenger vehicles through model year 2025. The ZEV regulation requires vehicle manufacturers to sell a certain percentage of ZEVs and plug-in hybrids annually through 2025. The next iteration of this program for future model years is under development. 15 states have adopted the regulations under this program.
Advanced Clean Cars II: mandates a ban on the sale of internal combustion engine passenger vehicles, trucks, and SUVs starting in 2035, and mandates annual increases in ZEV sales targets from model year 2026 to 2035. California has adopted the regulation, and New York announced that it would follow.
Advanced Clean Trucks: requires manufacturers of medium- and heavy-duty trucks to sell an increasing percentage of zero-emission trucks each year starting with model year 2024. In addition to California, Oregon, Washington, New Jersey, New York, and Massachusetts have also adopted this regulation; 10 other states and the District of Columbia intend to adopt it in the future.
Low Carbon Fuel Standard (LCFS): establishes annual targets through 2030 to ensure transportation-related fuels become cleaner and less carbon intensive. Oregon has a similar program, the Clean Fuels Program, which runs until 2025.
In 2006, the state of California passed AB-32 (the Global Warming Solutions Act of 2006), which requires California to reduce greenhouse gas emissions. To implement AB-32, the California Air Resources Board proposed a carbon tax, but this was not enacted. In May 2008, the Bay Area Air Quality Management District, which covers nine counties in the San Francisco Bay Area, passed a carbon tax on businesses of 4.4 cents per ton of CO2.

Colorado
In November 2006, voters in Boulder, Colorado, passed what is said to be the first municipal carbon tax. It covers electricity consumption, with deductions for using electricity from renewable sources (primarily Xcel's WindSource program). The goal is to reduce the city's emissions by 7% below 1990 levels by 2012. Tax revenues are collected by Xcel Energy and are directed to the city's Office of Environmental Affairs to fund programs to reduce emissions. Boulder's Climate Action Plan (CAP) tax was expected to raise $1.6 million in 2010. The tax was increased to the maximum allowable rate by voters in 2009 to meet CAP goals. As of 2017, the tax was set at $0.0049/kWh for residential users (avg. $21 per year), $0.0009/kWh for commercial (avg. $94 per year), and $0.0003/kWh for industrial (avg. $9,600 per year).
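Applying the 2017 rates just listed, a customer's annual CAP tax is simply rate times consumption. In the sketch below, the consumption figures are back-calculated from the quoted average bills (bill divided by rate), so they are illustrative rather than official usage data:

```python
# Boulder CAP tax at the 2017 rates quoted above ($ per kWh).
RATES_PER_KWH = {"residential": 0.0049, "commercial": 0.0009, "industrial": 0.0003}

def annual_cap_tax(sector: str, annual_kwh: float) -> float:
    return RATES_PER_KWH[sector] * annual_kwh

# Illustrative annual consumption implied by the quoted average bills.
for sector, kwh in (("residential", 4_300), ("commercial", 104_000),
                    ("industrial", 32_000_000)):
    print(f"{sector}: ${annual_cap_tax(sector, kwh):,.2f} per year")
# Matches the quoted averages of about $21, $94, and $9,600 respectively.
```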
Tax revenues were expected to decrease over time as conservation and renewable energy expand. The tax was renewed by voters on 6 November 2012. As of 2015, the Boulder carbon tax was estimated to reduce carbon output by over 100,000 tons per year and to provide $1.8 million in annual revenue. This revenue is invested in bike lanes, energy-efficiency measures, rebates, and community programs. The surcharge has been generally well received.

Maryland
In May 2010, Montgomery County, Maryland, passed the nation's first county-level carbon tax. The legislation required payments of $5 per ton of CO2 emitted from any stationary source emitting more than a million tons of carbon dioxide per year. The only source fitting the criteria was an 850-megawatt coal-fired power plant then owned by Mirant Corporation. The tax was expected to raise between $10 million and $15 million for the county, which faced a nearly $1 billion budget gap. The law directed half of the revenue toward low-interest loans for county residents to invest in residential energy efficiency. The county's energy supplier buys its energy at auction, requiring the plant owner to sell its energy at market value, preventing any increase in energy costs. In June 2010, Mirant sued the county to stop the tax. In June 2011 the federal court of appeals ruled that the charge was a fee imposed "for regulatory or punitive purposes" rather than a tax, and therefore could be challenged in court. The County Council repealed the fee in July 2012.

GHG reduction targets
States with statutory GHG reduction targets: California, Colorado, Connecticut, Hawaii, Maryland, Maine, Minnesota, Massachusetts, New Jersey, New York, Nevada, Oregon, Rhode Island, Vermont, Virginia, and Washington. States that don't have statutory targets, but have statutory GHG reporting requirements: Iowa and Pennsylvania.

Renewable portfolio standards
38 states have established renewable portfolio standards or voluntary targets, which increase the share of renewable electricity generation over time.

Lead by example programs
New Hampshire's Better Buildings Neighborhood Program
New Jersey's Clean Energy Program
Atlanta's Virginia Highland – first carbon-neutral zone in the United States

Local programs
Municipal, county, and regional governments have substantial influence on greenhouse gas emissions, and many have reduction goals and programs. Local governments are often among the largest employers in their jurisdictions and can achieve substantial reductions in their own operations, for example by using zero-emissions vehicles, making government buildings energy-efficient, making or buying renewable energy, and providing incentives for employees to walk, bike, or take transit to work. Local governments also control several policy areas that influence emissions for the population as a whole. These include land use regulations such as zoning; transportation infrastructure such as public transit, parking, and bike lanes; and building codes and efficiency regulations. Some municipalities act as utility cooperatives and set a minimum standard for renewable generation.

Non-governmental responses
Individual action
Actions taken by individuals on climate change include changes to diet, travel, household energy use, consumption, and family size. Individuals can also engage in local and political advocacy on climate change. A variety of carbon-offsetting options are available to individuals through non-profit organizations.
Business community
Numerous large businesses have started cutting emissions and have committed to eliminating net emissions by various dates in the future, resulting in higher demand for renewable energy and lower demand for fossil fuel energy. Businesses may also go carbon neutral by enrolling in Carbonfree® programs or certifying their products as Carbonfree® through carbon-offset organizations.

Technologies in development
Carbon Sequestration Regional Partnerships
Nuclear: Generation IV Nuclear Energy Systems Initiative, Nuclear Hydrogen Initiative, Advanced Fuel Cycle Initiative, Global Nuclear Energy Partnership
Clean Automotive Technology
Hydrogen Technology
High-temperature superconductivity

See also
Climate Registry
Coal in the United States
Energy conservation in the United States
Greenhouse gas emissions in Kentucky
List of U.S. states by carbon dioxide emissions
Phase-out of fossil fuel vehicles
Plug-in electric vehicles in the United States
Politics of global warming
Regulation of greenhouse gases under the Clean Air Act
Select Committee on Energy Independence and Global Warming
U.S. Climate Change Science Program
List of coal-fired power stations in the United States
List of natural gas-fired power stations in the United States

External links
Inventory by Climate Trace
Live carbon emissions from electricity generation in some states
U.S. Emissions Data (Energy Information Administration)
greenhouse gas emissions by china
Greenhouse gas emissions by China are the largest of any country in the world in both production and consumption terms, and stem mainly from coal burning, including coal-fired power stations, coal mining, and blast furnaces producing iron and steel. Measured on a production basis, China emitted over 14 gigatonnes (Gt) CO2eq of greenhouse gases in 2019, 27% of the world total. Measured on a consumption basis, which adds emissions associated with imported goods and subtracts those associated with exported goods, China accounts for 13 Gt, or 25%, of global emissions.

Despite having the largest emissions in the world, China's large population means its per-person emissions have remained considerably lower than those in the developed world. This corresponds to over 10.1 tonnes CO2eq emitted per person each year, slightly above the world average and the EU average but significantly lower than the second-largest emitter, the United States, at 17.6 tonnes per person. In cumulative historic emissions, OECD countries have produced four times more CO2 than China, owing to developed countries' earlier industrialization. Overall, since its production-based total exceeds its consumption-based total, China is a net exporter of embodied greenhouse gas emissions.

The targets laid out in China's nationally determined contribution in 2016 will likely be met, but are not enough to properly combat global warming. China has committed to peak emissions by 2030 and reach net zero by 2060. To limit warming to 1.5 °C, coal plants in China without carbon capture must be phased out by 2045. China continued to build coal-fired power stations in 2020 and has promised to "phase down" coal use from 2026. According to various analyses, China is estimated to overachieve its renewable energy capacity and emission reduction goals early, but long-term plans are still required to combat global climate change and meet the Nationally Determined Contribution (NDC) targets.

Greenhouse gas sources
Since 2006, China has been the world's largest annual emitter of CO2. According to estimates provided by the Netherlands Environmental Assessment Agency, China's carbon dioxide emissions in 2006 amounted to 6.2 billion tons, while the United States' CO2 emissions in the same year were 5.8 billion tons. In 2006, China's carbon dioxide emissions were 8 percent higher than America's, the agency said; the U.S. had emitted 2% more carbon dioxide than China in 2005. China ratified the Kyoto Protocol as a non-Annex B party without binding targets, and ratified the Paris Agreement to fight climate change. As the world's largest coal producer and consumer, China worked to change its energy structure and saw a decrease in coal consumption from 2013 to 2016. However, China, the United States and India, the three biggest coal users, increased coal mining in 2017. The Chinese government has implemented several policies to control coal consumption and has boosted the use of natural gas and electricity. Looking ahead, China's construction and manufacturing industries will give way to the service industry, and the government will not set a higher goal for economic growth in 2018; coal consumption may therefore not grow continuously over the next few years.

In 2019 China is estimated to have emitted 27% of world GHG, followed by the US with 11% and India with 6.6%. China is implementing policies to mitigate the adverse effects of climate change, most of which aim to constrain coal consumption.
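The production/consumption distinction above is an accounting identity: consumption-based totals adjust production-based totals for trade. The Python sketch below shows the arithmetic; the production figure is the roughly 14 Gt cited above, while the trade-embodied split is a hypothetical illustration consistent with the roughly 13 Gt consumption total.

```python
# Sketch of the trade adjustment behind consumption-based GHG accounting.
# Production figure is from the text; import/export splits are hypothetical.

def consumption_based(production_gt: float,
                      embodied_in_imports_gt: float,
                      embodied_in_exports_gt: float) -> float:
    """Add emissions embodied in imports, subtract those in exports."""
    return production_gt + embodied_in_imports_gt - embodied_in_exports_gt

production = 14.0                    # Gt CO2eq, China 2019, production basis
imports_emb, exports_emb = 0.5, 1.5  # hypothetical trade-embodied values

print(consumption_based(production, imports_emb, exports_emb))  # 13.0
# exports_emb > imports_emb is what makes a country a net exporter
# of embodied emissions.
```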
China's Nationally Determined Contribution (NDC) commits to peaking CO2 emissions by 2030 at the latest and to increasing the use of non-fossil energy carriers to 20% of total primary energy supply. If China reaches its NDC targets, GHG emissions would be 12.8–14.3 GtCO2e in 2030, a 64% to 70% reduction in emission intensity below 2005 levels. China has surpassed its 2020 solar and wind deployment targets.

Energy production
Power is estimated to be the largest emitter: 27% of greenhouse gases produced in 2020 were generated by the power sector. Most electricity in China comes from coal, which accounted for 65% of the electricity generation mix in 2019. Electricity generation by renewables has been increasing, with the construction of wind and solar plants doubling from 2019 to 2020. According to a major 2020 study by Energy Foundation China, in order to limit warming to 1.5 °C, coal plants without carbon capture must be phased out by 2045. Transport was estimated in 2021 to be less than 10% of the country's emissions but growing. According to the Natural Resources Defense Council, the Chinese power sector is estimated to hit its carbon emission peak around 2029.

Energy consumption
According to the 2016 Chinese Statistical Yearbook published by China's National Bureau of Statistics, China's energy consumption was about 4.3 billion tonnes of standard coal equivalent (reported as 430,000 units of 10,000 tons SCE), comprising 64% coal, 18.1% crude oil, 5.9% natural gas, and 12.0% primary electricity and other energy. Since 2011, the share of coal has decreased, and the shares of crude oil, natural gas, primary electricity and other energy have increased. China experienced an increase in electricity demand and usage in 2017 as the economy accelerated. According to the Climate Data Explorer published by the World Resources Institute, China, the European Union, and the U.S. together contributed more than 50% of global greenhouse gas emissions. In 2016, China's greenhouse gas emissions accounted for 26% of total global emissions. The energy industry has been the biggest contributor to greenhouse gas emissions over the last decade. Although China's countrywide emissions are large, its per capita carbon dioxide emissions are still lower than those of some other developed and developing countries.

Industry
Manufacturing industry is estimated at 19% of 2020 emissions.

Cement
Cement is estimated at 15% of emissions, but only a tenth of companies were reporting data as of 2021.

Iron and steel
Steel is estimated at 15% to 20% of emissions; consolidation of the industry may help.

Agriculture
Agriculture is estimated at 13% of 2020 GHG. Slightly over half of agricultural emissions are estimated to be nitrous oxide, and almost all the rest methane.

Waste
Waste is estimated at 6% of 2020 emissions. Most municipal solid waste is sent to landfill.

Coal mine methane
China is by far the largest emitter of methane from coal mines.

Mitigation
A 2011 Lawrence Berkeley National Laboratory report predicted that Chinese CO2 emissions would peak around 2030. This is because in many areas, such as infrastructure, housing, commercial building, appliances per household, fertilizers, and cement production, a maximum intensity will be reached and replacement will take the place of new demand. The 2030 emissions peak also became China's pledge at the Paris COP21 summit.
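The intensity arithmetic behind the NDC range above can be sketched as follows. Carbon intensity is emissions per unit of GDP, so the reduction depends on both the emissions path and GDP growth; the GDP figures below are hypothetical placeholders (only the 2005-to-2030 growth ratio matters), chosen so the output lands inside the cited 64–70% range.

```python
# Sketch of the carbon-intensity arithmetic behind the NDC figures above.
# Intensity = emissions / GDP; GDP values here are hypothetical placeholders.

def intensity_reduction(e_2005: float, gdp_2005: float,
                        e_2030: float, gdp_2030: float) -> float:
    """Fractional fall in emissions per unit of GDP between the two years."""
    return 1 - (e_2030 / gdp_2030) / (e_2005 / gdp_2005)

E_2005 = 7.3        # Gt CO2e in 2005 (approximate)
GDP_GROWTH = 5.5    # hypothetical 2005->2030 real GDP multiple

for e_2030 in (12.8, 14.3):   # projected 2030 range quoted above
    print(f"{intensity_reduction(E_2005, 1.0, e_2030, GDP_GROWTH):.0%}")
# prints ~68% and ~64%, inside the 64-70% range cited above
```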
Carbon emission intensity may decrease as policies are strengthened and more effectively implemented, including through more effective financial incentives, and as less carbon-intensive energy supplies are deployed. In a "baseline" computer model, CO2 emissions were predicted to peak in 2033; in an "Accelerated Improvement Scenario" they were predicted to peak in 2027. China also established 10 binding environmental targets in its Thirteenth Five-Year Plan (2016-2020). These include an aim to reduce carbon intensity by 18% by 2020, as well as a binding target for renewable energy of 15% of total energy, raised from under 12% in the Twelfth Five-Year Plan. According to BloombergNEF, the levelized cost of electricity from new large-scale solar power has been below that of existing coal-fired power stations since 2021.

Policy
Climate change mitigation policy has been an important part of China's environmental protection strategy since the 2010s, following rapid domestic development. With the world's largest annual (and second-largest cumulative) greenhouse gas emissions and the largest population, China has published a series of policies and laws to mitigate environmental impacts, such as reducing atmospheric pollution, transitioning from fossil fuels to renewable energy, and achieving carbon neutrality. In the past 20 years, China has published four laws and five executive plans as outlines for addressing climate change. Government departments and local governments have also developed their own plans and implementation methods based on the national outline. In 2015 China joined the Paris Agreement to constrain global temperature rise and greenhouse gas emissions. In 2020 China created the 14th Five-Year Plan (FYP); its key climate- and energy-related ideas will be critical to China's energy transition and to global efforts to tackle climate change. In April 2021, the United States and China agreed to cooperate on reducing global climate change, and a series of international and domestic mitigation and adaptation strategies has been published based on the Paris Agreement.

Policy and Law
Forest Law of the People's Republic of China (1998)
The aim of this law was to conserve and rationally exploit forest resources. It accelerated territorial afforestation and cultivation while also ensuring forest product management, production, and supply in order to meet socialist construction requirements.

Energy Conservation Law (2007)
The aim of this law was to strengthen energy conservation, especially for key energy-using institutions, as well as to encourage energy efficiency and energy-saving technology. The legislation allowed the government to promote and facilitate the use of renewable energy in a variety of applications.

Renewable Energy Act (2009)
This Act outlines the responsibilities of the government, businesses, and other users in the production and use of renewable energy. It includes policies and targets relating to mandatory grid connectivity, market control legislation, differentiated pricing, special funds, and tax reliefs, as well as a target of 15 percent renewable energy by 2020.

12th Five-Year Plan (2011-2015)
The 12th Five-Year Plan sought to make domestic consumption and development more economically equitable and environmentally friendly. It also shifted the economy's focus away from heavy industry and resource-intensive manufacturing and towards a more consumer-driven, resource-efficient economy.
The National Strategy for Climate Change Adaptation (2013)
The strategy established clear guidelines and principles for adapting to and mitigating climate change. It includes interventions such as early-warning identification and information-sharing systems at the national and regional levels, an ocean disaster monitoring system, and coastal restoration to protect water supplies, reduce soil erosion, and improve disaster prevention.

National Plan for Tackling Climate Change (2014-2020)
The National Plan for Tackling Climate Change is a national law covering prevention, adaptation, scientific study, and public awareness. By 2020, China planned to reduce carbon emissions per unit of GDP by 40-45 percent compared to 2005 levels, raise the share of non-fossil fuels in primary energy consumption to 15%, and increase forest area and stock volume by 40 million hectares and 1.3 billion m3, respectively, compared to 2005 levels.

Energy Development Strategy Action Plan (2014-2020)
This plan aimed to reduce China's high energy consumption per unit of GDP through a series of steps and mandatory goals, encouraging more productive, self-sufficient, renewable, and creative energy production and consumption.

Law on the Prevention and Control of Atmospheric Pollution (2016)
The aim of this law is to preserve and improve the environment, prevent and regulate air pollution, protect public health, advance ecological civilization, and promote sustainable economic and social growth. It demands that robust emission control initiatives be implemented against the pollution caused by the burning of coal, industrial production, motor vehicles and vessels, dust, and agricultural activities.

13th Five-Year Plan (2016-2020)
The 13th Five-Year Plan published the strategy and pathway for China's development during 2016-2020 and set specific environmental and productivity goals. Peak goals for carbon emissions, energy use, and water use were established in the plan. It also stated objectives for increasing industrial productivity, removing obsolete or overcapacity production facilities, increasing renewable energy production, and improving green infrastructure.

Emissions Trading
The Chinese Ministry of Finance originally proposed a carbon tax in 2010, to come into effect in 2012 or 2013. The tax was never passed; in February 2021 the government instead set up a carbon trading scheme.

Vehicles
Vehicles account for around 8% of the heat-trapping gases released annually in China, and as the Chinese vehicle stock rises and heavy manufacturing declines as a share of the overall economy, this percentage will rise in the coming years. Fuel performance regulations and funding for electric vehicles are two of the Chinese government's key policies on vehicle emissions. The government refers to vehicles fueled by non-petroleum fuels as "new energy vehicles" (NEVs). Almost all NEVs in China today are battery-powered plug-in electric vehicles.

Eco-Cities
The Chinese government has strategically promoted eco-cities as a policy measure for addressing rising greenhouse gas emissions resulting from China's rapid urbanization and industrialization. These projects seek to blend green technologies and sustainable infrastructure to build large, environmentally friendly cities nationwide. The government has launched three programs to incentivize eco-city construction, encouraging hundreds of cities to announce plans for eco-city developments.
Future Plans
14th Five-Year Plan (2021-2025)
The 14th Five-Year Plan (FYP), created in September 2020, is the key climate- and energy-related plan, critical to China's energy transition and to global efforts to tackle climate change. On September 22, 2020, Chinese leader Xi Jinping stated: "China will increase its nationally determined contributions, adopt more powerful policies and measures, strive to reach the peak of carbon dioxide emissions by 2030, and strive to achieve carbon neutrality by 2060." Of particular interest is how China will use the FYP to achieve peak carbon before 2030 and carbon neutrality by 2060. The plan will be seen by many as a test of how seriously the pledge is being taken at the policy level.

Unlike most nations that have committed to carbon neutrality, China's economy is growing rapidly; it is still a developing country and, as of 2020, growth was still linked to carbon emissions. In the 14th Five-Year Plan, the Chinese government set out climate mitigation goals including a higher share of non-fossil fuels in the energy mix, reduction of CO2 emissions per unit of GDP, a carbon cap for the power sector, reduction of fine-particle pollution in key cities, and greater forest coverage. These goals cover industrial production, transportation, forestry, and aspects of citizens' daily lives.

U.S.-China Cooperation
On April 15 and 16, 2021, U.S. Special Presidential Envoy for Climate John Kerry and China Special Envoy for Climate Change Xie Zhenhua met in Shanghai to discuss aspects of the climate crisis. The U.S. and China subsequently released a joint statement and agreed to cooperate with each other and with other countries to tackle the climate crisis. According to the joint statement published by the U.S. Department of State, "This includes both enhancing their respective actions and cooperating in multilateral processes, including the United Nations Framework Convention on Climate Change and the Paris Agreement." In the short term, the United States and China will take the following actions to further address the climate crisis:

Both countries intend to develop, by COP 26 in Glasgow, their respective long-term strategies aimed at net zero GHG emissions/carbon neutrality.
Both countries intend to take appropriate actions to maximize international investment and finance in support of the transition from carbon-intensive fossil fuel based energy to green, low-carbon and renewable energy in developing countries.
They will each implement the phasedown of hydrofluorocarbon production and consumption reflected in the Kigali Amendment to the Montreal Protocol.

In addition, both countries will continue to discuss concrete actions in the 2020s to reduce emissions aimed at keeping the Paris Agreement-aligned temperature limit within reach. Potential areas include policies, measures, and technologies to decarbonize industry and power; increased deployment of renewable energy; green and climate-resilient agriculture; energy-efficient buildings; green, low-carbon transportation; cooperation on addressing emissions of methane and other non-CO2 greenhouse gases; cooperation on addressing emissions from international civil aviation and maritime activities; and other near-term policies and measures, including with respect to reducing emissions from coal, oil, and gas.

Energy efficiency
In 2004, Premier Wen Jiabao promised to use an "iron hand" to make China more energy efficient. Energy efficiency improvements have somewhat offset increases in energy output as China continues to develop.
Since 2006, the Chinese government has increased export taxes on energy-inefficient industries, reduced import tariffs on certain non-renewable energy resources, and closed down a number of inefficient power and industrial plants. In 2009, for example, for every two new plants (in terms of energy generation capacity) built, one inefficient plant was closed. China is unique in closing so many inefficient plants.

Renewable energy
China is the world's leading investor in wind turbines and other renewable energy technologies and produces more wind turbines and solar panels each year than any other country. Coal burning is the major cause of China's contribution to global warming, and China has therefore tried to transition from fossil fuels toward renewable energy since 2010. China is the world leader in renewable energy deployment, with more than twice the capacity of any other nation; it accounted for 43% of global renewable energy capacity additions in 2018. For decades, hydropower has been a major source of energy in China. In the last ten years, wind and solar power have risen significantly: renewables accounted for approximately a quarter of China's electricity generation in 2018, with 18% coming from hydropower, 5% from wind, and 3% from solar. Nuclear power is planned to be rapidly expanded; by mid-century, fast neutron reactors, which allow much more efficient use of fuel resources, are seen as the main nuclear power technology. China has also dictated tough new energy standards for lighting and gas mileage for cars, and could push electric cars to curb its dependence on imported petroleum (oil) and foreign automobile technology.

Co-benefits
As in India, cutting greenhouse gas emissions together with air pollution in China saves enough lives to easily cover the cost.

Impact of the 2019-20 coronavirus outbreak
A temporary slowdown in manufacturing, construction, transportation, and overall economic activity at the beginning of the 2019-20 coronavirus outbreak reduced China's greenhouse gas emissions by "about a quarter", as reported in February 2020. Nonetheless, for the year from April 1, 2020 to March 31, 2021, China's CO2 emissions reached a record high of nearly 12 billion metric tons, and China's carbon emissions during the first quarter of 2021 were higher than in the first quarters of both 2019 and 2020. Temporary reductions in carbon emissions due to lockdowns and initial economic relief efforts have limited long-term consequences; the direction of fiscal stimulus plays a more significant role in influencing long-term carbon emissions.

Targets
As noted above, the targets laid out in China's Intended Nationally Determined Contribution (INDC) in 2016 will likely be met but are not enough to properly combat global warming, and the Thirteenth Five-Year Plan (2016-2020) set 10 binding environmental targets, including an 18% cut in carbon intensity by 2020 and a 15% renewable energy share. The Thirteenth Five-Year Plan also set, for the first time, a cap on total energy use from all sources: no more than the equivalent of 5 billion tons of coal through 2020.

See also
Debate over China's economic responsibilities for climate change mitigation
List of countries by greenhouse gas emissions

References
Sources
"Nationally Determined Contributions submitted to UN". www4.unfccc.int. Retrieved 2019-10-31.
China's New Growth Pathway: From the 14th Five-Year Plan to Carbon Neutrality (PDF) (Report). Energy Foundation China. December 2020.
Friedlingstein, Pierre; Jones, Matthew W.; O'Sullivan, Michael; Andrew, Robbie M.; et al. (2019). "Global Carbon Budget 2019". Earth System Science Data. 11 (4): 1783–1838. doi:10.5194/essd-11-1783-2019.
greenhouse gas emissions by turkey
Coal, cars and lorries vent more than a third of Turkey's five hundred million tonnes of annual greenhouse gas emissions, which are mostly carbon dioxide and are part of the cause of climate change in Turkey. The nation's coal-fired power stations emit the most carbon dioxide, and other significant sources are road vehicles running on petrol or diesel. After coal and oil, the third most polluting fuel is fossil gas, which is burnt in Turkey's gas-fired power stations, homes and workplaces. Much methane is belched by livestock; cows alone produce half of the greenhouse gas from agriculture in Turkey.

Economists say that major reasons for Turkey's greenhouse gas emissions are subsidies for coal-fired power stations: 18  and the lack of a price on carbon pollution.: 1  In January 2023 the National Energy Plan was published: it forecast that 1.7 GW more local coal power would be connected to the grid by 2030.: 15  Even without a carbon price, renewable electricity in Turkey is cheaper than electricity generated from coal and gas, so the Chamber of Engineers says that without subsidies coal-fired power stations would gradually be shut down. The Right to Clean Air Platform argues that there should be a legal limit on fine airborne dust, much of which comes from car and lorry exhaust. Low-emission zones in cities would reduce both local air pollution and carbon dioxide emissions.

Turkey's share of global greenhouse gas emissions is about 1%, which is similar to its share of population. Annual per-person emissions since the mid-2010s have varied around six and a half tonnes, which is more than the global average. Although greenhouse gas totals are reported, some details, such as the split between cars and lorries, are not published. Turkey re-absorbs about a tenth of its emissions, mostly through its forests.

The government supports reforestation, electric vehicle manufacturing and low-carbon electricity generation, and is aiming for net zero carbon emissions by 2053. But it has no plan for coal phase-out, and its nationally determined contribution to the Paris Agreement on limiting climate change is an increase rather than a decrease. Unless Turkey's climate and energy policies are changed, the 2053 net zero target will be missed, and exporters of high-carbon products, such as cement and electricity, will have to pay carbon tariffs. In 2023 there was misinformation about a draft climate law aiming to keep the tariff money within the country by starting carbon emission trading.

Estimates ahead of official inventory
Carbon dioxide (CO2) emissions from fossil fuels are by far the biggest part of greenhouse gas (GHG) emissions. Climate Trace uses space-based measurements of carbon dioxide to quantify large emission sources, such as major coal-fired power stations in Turkey. According to its estimates, forestry and land use were a net emitter in 2021; even excluding that sector, 615 million tonnes of GHG were emitted that year.

Monitoring, reporting and verification
Monitoring, reporting and verification (MRV) includes sharing information and lessons learned, which strengthens the trust of international climate finance donors. The Turkish government's Statistical Institute (Turkstat) follows the United Nations Framework Convention on Climate Change (UNFCCC) reporting guidelines, so it uses production-based GHG accounting to compile the country's greenhouse gas inventory.
As of 2015, using consumption-based accounting would give a total over 10% higher, as manufacturing the products imported to Turkey emits more GHG than manufacturing its exports. Turkstat sends the data and an accompanying report to the UNFCCC each April, about 15 months after the end of the reported year. Emissions from fuels sold in the country for international aviation and shipping are accounted for separately in reports to the UNFCCC and are not included in a country's total.: 46  In 2021 jet kerosene, supplied at Turkish airports and burnt by international flights, emitted 8.39 Mt CO2e (carbon dioxide equivalent),: 60  and diesel oil and residual fuel oil from Turkish ports powering international shipping emitted 1.89 Mt CO2e.: 62

The Intergovernmental Panel on Climate Change (IPCC) defines three methodological tiers for measuring emissions. Tier 1 uses global defaults and simplified assumptions, so it is the easiest but least accurate. Tier 2 uses country-specific values and more detailed data. Tier 3 uses the most detailed data and modelling, so it is the most difficult to compile but the most accurate. To make the best use of human resources, each nation may decide to use higher tiers only to estimate its particular "key categories". Turkstat selects these categories depending on the absolute level of emissions from a category, its trend, or its uncertainty.: 439  For example, N2O from wastewater treatment and discharge was a key category for 2021 solely because of its quickly rising emissions.: 440  Nevertheless, most of the key categories selected in 2021 are the largest emitting sectors, cement production for example. Turkey uses Tier 2 and Tier 3 methodology for some key categories: for example, a power plant might analyse the lignite it burns, which differs from mine to mine.: 72  Although road transport is a key category, it is not split between cars and lorries as is done in some countries. In 2021 the UNFCCC asked Turkey why it reported negligible indirect GHGs (carbon monoxide, nitrogen oxides, non-methane volatile organic compounds and sulfur oxides) in 2018.: 12

Greenhouse gas sources
Turkey emitted 524 Mt of GHG in 2020, which is higher than would be sustainable under a global carbon budget. Per-person gross emissions were above the world average, at 6.3 t in 2020. Turkey's cumulative CO2 emissions are estimated at around 11 Gt, less than 1% of the world's cumulative total (Turkey's population is about 1% of world population). Turkey's emissions can be viewed from perspectives other than the standard IPCC classification: for example, a 2021 study by Izmir University of Economics estimated that food, "from farm to fork", accounts for about a third of national emissions, similar to food's share of global emissions.

Fossil fuels
Burning coal in Turkey was the largest contributor to fossil-fuel emissions in 2021, followed by oil and natural gas.: 57  That year, Turkey's energy sector emitted over 70% of the country's GHG, mostly through electricity generation, followed by transport.: 43  In contrast, agriculture contributed 13% of emissions, and industrial processes and product use (IPPU) also 13%. Carbon capture and storage is not economically viable, since the country has no carbon pricing. The GHG emission intensity of energy consumption is higher than in the EU. From 2023 Turkey expects to greatly increase gas production.
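As a rough illustration of the tier methodology described above: every tier estimates emissions as activity data multiplied by an emission factor, and the tiers differ mainly in how specific that factor is. The Python sketch below shows the idea; the numeric factors are hypothetical illustrations, not official IPCC or Turkstat values.

```python
# Sketch of the IPCC tier idea: emissions = activity data x emission factor.
# Higher tiers replace the global default factor with country- or
# plant-specific values. Factors here are hypothetical illustrations.

def combustion_emissions_t(activity_tj: float, factor_t_per_tj: float) -> float:
    """Tonnes of CO2 from burning fuel: energy (TJ) times factor (t CO2/TJ)."""
    return activity_tj * factor_t_per_tj

lignite_burned_tj = 1_000.0
tier1_default = 101.0    # hypothetical global default factor for lignite
tier2_specific = 115.0   # hypothetical Turkey-specific factor (higher, as
                         # Turkish lignite is high in emission intensity)

print(combustion_emissions_t(lignite_burned_tj, tier1_default))   # 101000.0
print(combustion_emissions_t(lignite_burned_tj, tier2_specific))  # 115000.0
```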
In 2021, IEA head Fatih Birol called for fossil-fuel producing countries to include limits on methane leaks in their climate pledges, as the United States, for example, is doing. Production of public heat and electricity emitted 148 megatonnes of CO2e in 2021,: table 1s1 cell B10  mainly through coal burning. In 2020, emission intensity was about 440 gCO2/kWh, around the average for G20 countries. Investment in wind and solar is hampered by subsidies for coal.: 10  Subsidised coal burnt by poor families gives off black carbon (a contributor to climate change) and other local air pollution. Residential fuel, such as natural gas and coal, contributed 50 Mt CO2e in 2021.: 135  Burning fossil fuels such as coal and natural gas to heat commercial and institutional buildings emitted 14 Mt CO2e in 2021.: 133  According to the Ministry of Energy and Natural Resources, "Our country aims to use our energy resources efficiently, effectively and in a way that has a minimum impact on the environment within the scope of the sustainable development objectives."

Coal-fired power stations
Turkey's coal-fired power stations are the largest source of greenhouse gas emissions in Turkey, at 103 Mt (about 20% of national emissions – see pie chart) in 2021.: table 1.A(a)s1 cell G26 "solid fuels"  Over a kilogram of CO2 is emitted for every kWh of electricity generated by coal-fired power stations in Turkey.: 177  If operated at the targeted capacity factor, planned units at Afşin Elbistan would add over 60 Mt CO2 per year,: 319  more than one-tenth of the country's entire emissions. Almost all coal burnt in power stations is local lignite or imported bituminous (hard) coal. Coal analysis of Turkish lignite shows that it is high in ash and moisture, low in energy value and high in emission intensity, so Turkish lignite emits more CO2 than other countries' lignites. Although imported hard coal has a lower emission intensity when burnt, because it is transported much further its life-cycle greenhouse-gas emissions are similar to lignite's.: 177  When carbon dioxide from coal used by industry and buildings, and methane emissions from coal mining, are added to those from coal-fired electricity generation, over 30% of Turkey's annual emissions come from coal. In 2021, burning coal emitted 153 Mt CO2 in total.: 57  Methane leaks from coal mines in 2021 were equivalent to 6 Mt CO2.: 141  Eren Holding (via Eren Enerji's coal-fired ZETES power stations) emits over 2% of Turkey's GHG, and İÇDAŞ emits over 1% from its Bekirli coal-fired power stations. Emissions of black carbon are not published for individual power stations,: 2  as Turkey has not ratified the Gothenburg Protocol on air pollution.

Gas-fired power stations
Gas-fired power stations emitted 46 Mt CO2e in 2021.: table 1.A(a)s1 cell G27  It is difficult for them to compete with coal, partly due to the lack of a carbon price. Electricity generation from gas tends to increase when hydropower is limited by droughts. Import costs for natural gas are expected to fall during the mid-2020s with the start of production from the Sakarya Gas Field in the Black Sea.

Transport fuel
Transport emitted 91 Mt of CO2e in 2021,: 104  a bit over one tonne per person.
Road transport dominated the country's transport emissions with 86 Mt (including agricultural vehicles).: 105  Over three-quarters of Turkey's road-transport emissions come from diesel fuel.: 108  Average emissions of new cars in 2016 were about 120 g CO2/km.: 17  Although the EU had a 2021 target of 95 g of CO2/km, Turkey has no target.: 17  In 2018, Turkey had no measures in place to reduce the well-to-wheel impact of petrol and diesel vehicles, except for a requirement for 3% ethanol in fuel (compared to 10% in the EU); however, more use of biofuels may not be sustainable. Fuel quality and emissions standards for new cars are less strict than those in the EU,: 102  and in 2019 about 45% of cars were over 10 years old and energy-inefficient.: 16  The market share of electric vehicles was below the world average in 2020.: 113  Domestic flights emitted 3 Mt of CO2e in 2021,: 112  and their VAT rate was cut to 1%.

Industry
In 2021, Turkey's industrial sector emitted 75 Mt (13%) of GHG, but, as of 2019, estimates of the effects of government policy on industrial emissions are lacking.: 28  IEA head Fatih Birol has said that the country has a lot of potential for renewable energy. Some sugar factories, such as some owned by Türkşeker and Konya Seker, burn coal for the heat needed to make sugar and sometimes to generate electricity. Some industrial companies meet the Global Reporting Initiative GRI 305 emissions standard.

Iron and steel
The European steel industry has complained that steel imports from Turkey are unfair competition, because they are not subject to a carbon tax, and alleges that the natural gas used to produce some steel is subsidised. Turkish steel, primarily from minimills, averages about one tonne of CO2 per tonne of steel produced. Although this average is less polluting than China's, three steelworks (Erdemir, İsdemir and Kardemir) use blast furnaces and thus emit more than those using electric arc furnaces. The future Carbon Border Adjustment Mechanism (CBAM) in the European Green Deal may include a carbon tariff on Turkish steel produced in blast furnaces, but the CBAM could help arc furnaces compete against products such as Chinese steel.

Cement
Turkey is the sixth-largest cement producer in the world and the largest in Europe. In 2020 Turkey exported 30 million tonnes, worth almost US$1 billion, and was the largest source of EU cement imports. Cement (clinker) production in 2021 emitted 44 Mt CO2, 8% of the country's total GHG.: 155  Climate Trace has estimated the contributions of individual factories, sometimes from kiln heat visible from satellites, and says that factories which each emitted more than 1.5 Mt in 2021 include the Körfez, Pazarcık, and Gönen cement plants. Turkey's construction sector contracted at the end of 2018 and so used less cement. Cement producers in the EU have to buy EU carbon credits, and say the CBAM is needed to protect them from unfair competition from Turkish companies, which pay no carbon price. The CBAM could add up to 50% to the cement price.

Agriculture and fishing
Agriculture accounted for 72 Mt, 13% of Turkey's total 2021 GHG, including 61% of its methane emissions and 78% of its nitrous oxide emissions. These are due primarily to enteric fermentation, agricultural soils, and manure management. Cattle emit almost half of the GHG from agriculture: of the 72 Mt total, roughly 32 Mt is attributable to cattle (27 Mt from enteric fermentation plus about 61% of the 9 Mt from manure management).: 240, 257  About three quarters of red meat production in Turkey is beef.
Turks eat an average of 15 kilograms (33 lb) of beef per person each year (more than the world average), and the country produced 1 million tonnes of beef in 2021. There are about 18 million cattle (including 8 million dairy cattle and a few buffalo), 45 million sheep and 11 million goats in the country; livestock are subsidized, and some farmers are given "diesel payments". US$411 million worth of cattle were imported in 2020. VAT on meat and dairy is 1%, like other "staple foods". Being ruminants, sheep, goats and cattle belch methane. Fertilizers can emit the GHG nitrous oxide, but estimates of the effects of government policy on the agricultural and waste sectors' emissions are lacking.: 28  Production of plastic, such as for use in agriculture, may release significant GHG in future. National GHG inventories do not yet include bottom trawling, as the IPCC has yet to issue accounting guidelines.

Waste
The waste sector contributed 15 Mt (3%) of Turkey's 2021 GHG, and landfilling is the most common waste-disposal method. Climate Trace estimates the Odayeri landfill, on the European side of Istanbul, to be the biggest single waste emitter at 3.8 Mt in 2021, even though it has a biogas facility. Organic waste sent to landfills emits methane, but the country is working to improve sustainable waste and resource management. One third of organic waste is composted, but some argue for incineration.

Mitigation
Turkey's GHG emissions are not in line with the Paris Agreement objective of limiting the temperature rise to well below 2 °C. According to Climate Action Tracker, if all governments' targets were like Turkey's, global warming would exceed 4 °C by the end of the 21st century. Climate Action Tracker predicts that after the COVID-19 pandemic the country's emissions will resume their rise and reach 40% to 70% above the 2020 level by 2030. The national strategy and action plan only partially covers short-term climate change mitigation.: 101  According to Climate Transparency, to take a fair share of limiting global heating to 1.5 °C, Turkey would need to reduce emissions to 365 Mt CO2e by 2030. But the United Nations Environment Programme (UNEP) says a faster reduction is needed, and emissions per person per year would need to be cut by more than half, to about 2–2.5 t CO2e, by 2030.: XXV  As of 2021, discussions continue at the Parliamentary Research Commission on Global Climate Change. The government intends to complete its review of long-term (2030 to 2050) policy: 42  and publish a new National Climate Change Action Plan, with sector-specific targets and monitoring mechanisms, by 2023. Turkey argues that as a developing country it should be exempt from net emission reduction targets, but other countries do not agree.: 59  Unless Turkey's energy policy is changed, European Union (EU) emissions per person are forecast to fall below Turkey's during the 2020s.: 22  Since the EU is Turkey's main trading partner, a comparison with targets in the European Green Deal is important to help Turkish businesses avoid future EU carbon tariffs on exports such as steel and cement. Public and private sector working groups are being formed to discuss the EU Green Deal, and the Trade Ministry has published an action plan in response to it and its CBAM. According to one 2020 study, Turkey joining the EU Emission Trading Scheme would be more economically beneficial for the power sector than a national carbon tariff.

Path to net zero
Turkey is aiming for net zero carbon emissions by 2053.
But Climate Action Tracker said in 2021 that critical details on scope, target architecture and transparency are missing from the net zero goal. The World Bank has estimated the costs and benefits, but has suggested the government do far more detailed planning. Turkey's Energy Efficiency Action Plan, which came into force in 2018, commits nearly US$11 billion to efficiency and could significantly limit emissions. The European Bank for Reconstruction and Development (EBRD) is investing in climate governance and energy efficiency, for example in smaller companies. In 2021 the government pledged to prepare a new plan for reducing emissions, but as of 2022 there is no plan to reduce coal use.

Later in 2021 Istanbul Policy Center, a thinktank which is part of Sabancı University, released a summary of its own plan. The plan says that net zero by 2050 is possible and that the key to decarbonization is increasing the share of solar and wind in electricity generation. It says that CO2 emissions could be reduced by 32% from 2018 to 2030, and that the share of renewable resources other than hydroelectricity in installed power could be increased from 17 per cent in 2018 to 50 per cent in 2030 and 77 per cent in 2050. According to the plan, Turkey could increase total wind and solar installed power to 35 GW by 2030 by constructing an average of 3 GW of solar and 2.5 GW of wind power plants every year, and gross CO2 emissions could be reduced to 132 million tons by 2050.

Energy
Emissions could be reduced considerably by switching from coal to existing gas-fired power stations: there is enough generating capacity to allow the decommissioning of all coal-fired power stations and still meet peak energy demand, as long as hydropower as well as gas is used to meet peaks in demand. By the mid-2020s the gas price is forecast to have fallen considerably, as Turkey's production from the Black Sea will be more than enough to meet national demand. However, according to a 2021 study, the electricity sector is financially unable to transform itself in response to the CBAM, and "to avoid market failure, the government must step in by designing a general decarbonization program for electricity production in Turkey". A solar panel factory began production in 2020; solar and wind power are the cheapest generating technologies but are underdeveloped. Fossil fuel subsidies risk carbon lock-in, but if they were scrapped, wind and solar power could expand faster.: 7  Relying simply on battery storage would be insufficient to decarbonise electricity, as periods of high and low demand last for two to three weeks. Ramping down nuclear power in Turkey will be technically possible at times when solar or wind increases or electricity demand drops, but would be expensive because of high fixed costs and lost sales revenue.: 72  However, after upgrading, repowering and adding a small amount of pumped-storage hydroelectricity, there are enough hydropower dams in Turkey to provide dispatchable generation to balance variable renewable energy, even allowing for more frequent droughts in Turkey in the future because of climate change.: 7  Solar farms are being co-located with hydropower to maintain generation in case of drought. Geothermal-electric capacity totalled 1.6 GW in 2020 and more is planned, but the lifetime CO2 emissions of some Turkish geothermal power are not yet clear.
National and international investments in renewable energy and energy efficiency are being made; for example, the EBRD is supporting the installation of smart meters. Along with cement, the electricity sector is forecast to be the hardest hit by the CBAM. According to the thinktank Ember, building new wind and solar power is cheaper than running existing coal plants which depend on imported coal. But it says there are obstacles to building utility-scale solar plants, such as a lack of new capacity for solar power at transformers, a 50-MW cap on any single solar power plant's installed capacity, and large consumers being unable to sign long-term power purchase agreements for new solar installations.

Buildings
Buildings are the largest energy consumers, and there are substantial opportunities for energy savings in both new build and renovations. A typical residential building emits almost 50 kgCO2eq/m2/year, mostly due to the energy used by residents. The Organisation for Economic Co-operation and Development (OECD) has said that more could be done to improve the energy efficiency of buildings, and that tax incentives offered for this would create jobs.: 62  Turkey was a co-leader of the group discussing zero-carbon buildings at the 2019 UN Climate Action Summit, and the city of Eskişehir has pledged to convert all existing buildings to zero emissions by 2050. Such energy efficiency improvements can be made in the same programme as increasing resilience to earthquakes in Turkey. However, in 2020 gas was subsidized.: 18  Increasing the proportion of passive houses has been suggested, as has adopting some EU building standards.

In rural areas without a piped gas supply, heat pumps could be an alternative to wood, coal and bottled gas, but buying a heat pump is rare, as it is very expensive for householders and there is no subsidy.: 29  However, owners of larger properties such as shopping centres, schools and government buildings have shown more interest. Direct geothermal heating (not to be confused with heat pumps) had an installed capacity of 3.5 GW thermal (GWt) in 2020, with the potential for 60 GWt, but it is unclear how much of it is low-carbon. According to a 2020 report commissioned by the environment ministry and the EBRD, further research on Turkish geothermal is needed, specifically on how to limit carbon dioxide venting to the atmosphere.: 283, 284

There is no data on the carbon intensity of cement.: 13  Emissions from cement production could be lessened by reducing its clinker content, for example by making Limestone Calcined Clay Cement, which is only half clinker. The second-largest reduction could be made by switching half the fuel from hard coal and petroleum coke (petcoke) to a mixture of rubber from waste tires, refuse-derived fuel and biomass. Although the country has enough of these materials, most cement kilns (there are 54: 156 ) use coal, petcoke or lignite as their primary energy source.: 154  More cross-laminated timber could be used for building instead of concrete. Further decarbonisation of cement production would depend heavily on carbon capture and storage (CCS).: 109  Despite Turkey's earthquake risk, CCS may be technically feasible in a salt dome near Lake Tuz or in Diyarbakır Province.

Transport
In the 2000s transport emission intensity improved, but this gain was partially lost in the 2010s due to the growing preference for sport utility vehicles.
Although Turkey has several manufacturers of electric buses and many are exported, fewer than 100 were in use in the country in 2021. E-bikes are manufactured, but cities could be improved to make cycling in Turkey safer. Although Turkey's ferries (unlike some other countries') are still fossil-fuelled, the world's first all-electric tugboat began working in Istanbul's harbour in 2020, electric lorries are manufactured, and an electric excavator is planned for 2022. Eti Mine Works produces small quantities of lithium carbonate locally and plans to increase production for use in batteries. A battery factory is planned by Aspilsan, which is part of Turkey's defence industry, and Ford Otosan started making electric vans in 2022. Over a quarter of a million charging stations are planned by 2030. Building codes are being changed to mandate electric car charging points in new shopping centres and car parks.

Partially due to high import tariffs, very few electric cars are sold; Chinese EVs are subject to a 40% import tariff, and in early 2023 less than 3% of cars sold were electric. Turkey's automotive industry makes electric cars locally, which have incentives; however, the special consumption tax is 10% or more. As well as cutting GHG, the creation of a domestic electric vehicle market by TOGG is hoped to reduce vehicle running costs, create jobs,: 76  and reduce oil imports. Introducing smart charging is important to avoid overloading Turkish electricity distribution networks.: 74

Petrol and diesel taxes are lower than in the neighbouring EU: 17  but higher than in oil-producing countries to the south. The legality of ridesharing companies is unclear, and taxis could be better integrated with public transport; however, Istanbul taxi regulations are politically deadlocked. The central government has drafted enabling regulations for low-emission zones, and at least one municipality is considering creating one. According to Shura, three-quarters of emissions in the transport sector come from road freight transport. Sales of fossil-fuelled road vehicles will be banned from 2040. Using International Civil Aviation Organization methodology, Turkish Airlines offers carbon offsets certified to the Verified Carbon Standard and Gold Standard, and Turkey is participating in the Carbon Offsetting and Reduction Scheme for International Aviation.

Industry
As of 2022, hydrofluorocarbon smuggling from Turkey to the EU remains a problem. Electric motors in small and medium-sized enterprises are becoming more efficient. Low-carbon hydrogen could help with hard-to-decarbonise industries, such as cement and petrochemicals, but further research is needed. As of 2021 there are almost no Turkish supporters of the Task Force on Climate-Related Financial Disclosures, which aims to provide information to investors about the risks of climate change to companies. The Turkish Industry and Business Association has asked the EU for funding to help strengthen alignment with the CBAM.

Agriculture and fishing
Climate-smart agriculture is being studied and financed, and agrivoltaics has been suggested as suitable for maize and some other shade-tolerant vegetables. President Erdoğan has called for more marine protected areas in international waters. There are no international waters adjacent to Turkey's territorial waters, of which about 4% is marine protected area.

Carbon sinks
Turkey has 23 million hectares of forest covering a quarter of the country, though over 40% is degraded woodland.
Turkey's forests are its main carbon sink and offset 34 Mt of the country's emissions in 2021.: 287  The government said in 2015 that by 2050 "forests are envisioned to stretch across over four-fifths of the country's territory". However, warmer and drier air in the south and west may make it difficult to sustain the present forest cover; despite regional variations, forests are nevertheless expected to remain an overall carbon sink. Almost all of Turkey's forest land belongs to the state and cannot be privatised. However, private afforestation permits have been issued to encourage tree planting in areas where tree density is low. Civil society organizations, such as the Turkish Foundation for Combating Soil Erosion and the Foresters' Association of Turkey, are also encouraging reforestation. In 2019, an annual "National Forestation Day" every 11 November was established by President Erdoğan. Junipers have been suggested for reforestation because of their hardiness, but are said to need help to regrow quickly; according to Ege University associate professor Serdar Gökhan Senol, the Ministry of Agriculture and Forestry sometimes replants when it should instead wait for regrowth.

Three-quarters of Turkey's land is deficient in soil organic matter. This matter contains soil organic carbon, which is estimated to total 3.5 billion tonnes at 30 centimetres (12 in) soil depth, with 36 t/ha in agricultural fields. Soil organic carbon has been mapped: this is important because carbon emissions from soil are directly related to climate change but vary according to soil interaction,: 107  and low levels of soil organic carbon increase the risk of soil erosion. Turkey is a major producer of marble, and it has been suggested that waste from the industry could capture carbon by calcium looping.

Economics
During the late 20th and early 21st centuries, growth of the Turkish economy, and to a lesser extent its population, increased emissions from electricity generation,: 10, 11  industry and construction,: 59–62  as described by the environmental Kuznets curve hypothesis. From the 1990s to the 2010s emissions were correlated with electricity generation, but during the 2010s economic growth and the increase in emissions decoupled somewhat.: 59  Since the 1970s the energy intensity of economic growth has fluctuated around 1 kWh per 2011 USD, whereas the carbon intensity of energy has fallen from 300 g per kWh to 200 g per kWh. In 2018, the government forecast that GHG emissions would increase in parallel with GDP growth over the next decade.: 30  Once economic growth resumes after the debt crisis that began in 2018 and the country's COVID-19 recession, energy demand is also expected to grow. Nevertheless, Carbon Tracker says that it will be possible to decouple economic growth and emissions by expanding the country's renewable-energy capacity and investing in energy efficiency with a sustainable energy policy.: 63

On average, the consumption-based CO2 emissions of the richest 10% of people in Turkey are more than double those of the rest of the population, as richer people tend to fly more and buy gasoline-fuelled SUVs. Nevertheless, 2019 studies disagree on whether Turkey's high income inequality causes higher CO2 emissions. While the government has pledged to buy 30,000 locally made electric cars, there were few explicit green measures in the 2020 package designed to aid recovery from the country's COVID-19 recession.
While the government has pledged to buy 30,000 locally made electric cars, there were few explicit green measures in the 2020 package designed to aid recovery from the country's COVID-19 recession. On the contrary, the VAT rate for domestic aviation was cut, and oil and gas were discounted. Almost all the stimulus was detrimental to the environment; according to a 2021 report, only Russia's was less green. Turkey has received climate finance from the Global Environment Facility, the Clean Technology Fund, and various bilateral funding, but is not eligible for the Green Climate Fund because of its status as a developed country under the UNFCCC.: 43

Worldwide, marginal abatement cost studies show that improving the energy efficiency of buildings and replacing fossil-fuelled power plants with renewables are usually the most cost-effective ways of reducing carbon emissions. A 2017 study concluded that a US$50/tonne carbon price (similar to the 2021 EU price) would reduce emissions by about 20%, mainly by discouraging coal. A more detailed 2020 study said that the electricity sector is key, and that low-cost abatement is possible in the building sector. The same study said that low levels of abatement in agriculture would be cheap, but high levels very expensive. A 2021 study by Shura said that the energy transition could increase national income by more than 1%, the largest part being wage increases due to higher-skilled jobs,: 8  such as in wind and solar power.: 58  According to the study, socioeconomic benefits such as better health and wages would be three times the financial cost.: 15

Turkey's carbon emissions are costly, even without carbon tariffs from other countries. The short-term health co-benefits of climate change mitigation have been estimated at $800 million for Turkey in the year 2028 alone.: 6  As of 2022 investment in green energy is far smaller than the country's potential. Academics have estimated that if Turkey and other countries invested in accordance with the Paris Agreement, Turkey would break even around 2060.: figure 4

Fossil fuel subsidies
According to the OECD, fossil fuel subsidies in 2019 totalled over 25 billion lira (US$4.4 billion), nearly 1% of GDP.: 74  Economics professor Ebru Voyvoda has criticised growth policies based on the construction and real estate sectors, and said that moving from fossil fuels to electricity is important. According to a 2020 report by the International Institute for Sustainable Development: "Turkey also lacks transparency and continues to provide support for coal production and fossil fuel use, predominantly by foregoing tax revenue and providing state-owned enterprise investment." A MWh of electricity from Turkish lignite emits over a tonne of CO2. Some electricity from these power stations is purchased by the state electricity company at a guaranteed price of US$50–55/MWh until the end of 2027,: 176  despite coal power subsidies being economically irrational. Coal miners' wages are subsidised.: 178

The Petroleum Market Law provides incentives for investors to explore for oil and produce it.: 198  According to the OECD, in 2019 the fuel tax exemption for naphtha, petroleum coke and petroleum bitumen was a subsidy of 6.7 billion lira (US$1.2 billion), the largest of Turkey's fossil fuel subsidies that year. Petcoke is used in cement production. In other countries fossil fuel subsidies have been successfully scrapped through good communication from government, immediate cash transfers to poor people, energy price smoothing, and energy transition support for households and firms.

Carbon pricing
Boğaziçi University has developed a decision-support tool and integrated assessment model for Turkey's energy and environmental policy.
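The link between a carbon price and abatement, such as the 2017 estimate that US$50/tonne would cut emissions by about 20%, is often reasoned about with a marginal abatement cost (MAC) curve: every measure cheaper than the carbon price is assumed to be taken up. The sketch below is a toy illustration of that logic with invented measures, costs, and potentials; it is not the Boğaziçi model or any published Turkish MAC curve.

```python
# Toy marginal abatement cost (MAC) curve: each measure has a cost in
# USD per tonne CO2 and an abatement potential in Mt CO2 per year.
# All numbers are invented for illustration.
measures = [
    ("building efficiency",    -10, 30),  # negative cost: saves money
    ("wind replacing coal",     20, 80),
    ("solar replacing coal",    25, 60),
    ("industry fuel switching", 45, 40),
    ("deep cuts in agriculture", 120, 25),
]

def abatement_at_price(price: float) -> float:
    """Total abatement from all measures no costlier than the carbon price."""
    return sum(mt for _, cost, mt in measures if cost <= price)

for p in (0, 50, 150):
    print(f"carbon price ${p}/t -> {abatement_at_price(p)} Mt/yr abated")
```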
Over 400 (about 9%) of the world's voluntary carbon offset projects are in Turkey: mostly wind, hydro, and landfill methane projects. As elsewhere, wildfires are a threat to forest carbon offsets. The main standards are the Gold Standard and the Verified Carbon Standard. Earlier academic assessment suggested a revenue-neutral carbon tax might be best for the Turkish economy, but carbon emissions trading is more likely to be accepted politically, and technical work for a pilot emissions trading system (ETS) is ongoing.: 48  Without a carbon tax or emissions trading, the country is vulnerable to carbon tariffs imposed by the EU, the UK and other export partners. Turkey received by far the most EU climate-change financing in 2018; the EBRD is also investing in energy efficiency and renewable energy, and has offered to support an equitable transition from coal. Although there is no carbon price, other taxes in 2021 covered 39% of emissions: 10  and were equivalent to a carbon price of 22.50 euros.: 13

The International Monetary Fund says G20 countries should make their high-emitting companies pay a carbon price, which should rise to $75 per tonne of CO2 by 2030. The OECD recommends carbon pricing for all sectors, but road fuel is currently Turkey's only major carbon pricing.: 60  Taxes meet the social cost of road-transport carbon, but not the social cost of the country's air pollution. All other sectors have a large gap between the actual tax (€6 per tonne of CO2 in 2018) and a tax covering this negative externality; thus emitters do not bear the actual cost of most GHG, violating the polluter pays principle. Annual fossil fuel import cost savings of approximately $17 billion from meeting Paris Agreement goals have been estimated.: 10  Turkish-American economist Daron Acemoğlu said in 2016 that carbon taxes alone do not generally act fast enough against dirty technologies, and that subsidising research into clean technologies is also necessary.

Politics
Article 56 of the Turkish Constitution states:

Everyone has the right to live in a healthy and balanced environment. It is the duty of the State and citizens to improve the natural environment, to protect the environmental health and to prevent environmental pollution.

A similar clause in the constitution of the US state of Montana has been used to declare laws that support fossil fuels unconstitutional. However, until production from large gas fields under the Black Sea begins in the mid-2020s, some in Turkey see burning local lignite as essential to lessen the high gas import bill. Likewise, until local production of solar panels and electric vehicles, and mining of lithium for batteries, all greatly increase, it is hard to avoid importing a lot of petroleum to make diesel and gasoline.

2000s
The Justice and Development Party (AK Party), led by Recep Tayyip Erdoğan, was elected to government in 2003 and has been in power almost continuously since then. Turkey ratified the UNFCCC in 2004, but says it is unfair that it is included amongst the Annex I (developed) countries. When the treaty was signed in 1992, Turkey had much lower emissions per person and no historical responsibility for greenhouse gas emissions. The Foreign Ministry therefore argues that Turkey should have been grouped with the non-Annex developing countries, which can receive climate finance from the Green Climate Fund. Turkey ratified the Kyoto Protocol in 2009.
2010s
In a 2011 dispute over air pollution in Turkey, the main opposition Republican People's Party criticised the government for prioritising fossil fuels. The Climate Change and Air Management Coordination Board was created to coordinate government departments, and includes three business organisations. The Environment Ministry chairs it, though other ministries have considerable influence over climate change policy. The Energy Ministry has an Environment and Climate Department (responsible for the GHG inventory), and the Ministry of Treasury and Finance leads on climate financing.: 40

Turkey signed the Paris Agreement in 2016 but did not ratify it. In 2015 Turkey declared its intention to achieve "up to a 21% reduction in GHG emissions from the Business as Usual level by 2030". But because "Business as Usual" was assumed to be such a large increase, the "21% reduction" is an increase of over 7% per year, to around double the 2020 level.

In 2019, Ümit Şahin, who teaches climate change at Sabancı University, said that Turkey saw industrialised Western countries as solely responsible.: 24  While discussing their limited actions on climate change, Turkey and other countries cited the forthcoming 2020 United States withdrawal from the Paris Agreement (not knowing at that time that the US would rejoin early the following year). Turkey was the 16th largest emitting country in 2019.

During the 2019 UN Climate Action Summit on achieving carbon neutrality by 2050, Turkey co-led the coalition on the decarbonization of land transport. Energy Minister Fatih Dönmez said that Turkey planned to increase the share of renewables to two-thirds of total electricity generation by 2023. Dönmez expressed Turkey's strong desire to add nuclear power to its energy mix, with Turkey's first nuclear power plant expected to be partially operational by 2023. As of 2019, the government aimed to keep the share of coal in the energy portfolio at around the same level in the medium and long term. This was explained, in part, by Turkey's desire to have a diverse mix of energy sources: rather than increase imports of gas, it wanted to retain domestic coal, albeit with safeguards to reduce the impact on human health and the environment.: 20  İklim Haber (Climate News) and KONDA Research and Consultancy found in 2018 that public opinion on climate change prefers solar and wind power.

2020s
Local politics and a just transition
Although the transition to clean energy increases employment in Turkey as a whole, for example in wind and solar power: 6  and the energy efficiency of buildings, lost jobs may be concentrated in certain locations and sectors.: 48  For example, closing Şırnak Silopi power station and the coal mines in Şırnak Province could increase already high unemployment there. A 2021 study estimated the mining sector would employ 21 thousand fewer people, 14% of total mining employment in 2018.: 57  The study also forecast job losses in textiles, agriculture and food processing, because such labour-intensive sectors would not be able to keep up with efficiency gains in other sectors.: 13  Because carbon pricing would be regressive, economists say that poor people should be compensated.: 6  Policy for a just transition from carbon-intensive assets, such as coal, is lacking. Similarly, it is hard for livestock farmers to make a profit, so a sudden removal of subsidies would be an economic shock. But, unlike in neighbouring Greece, there have been no public debates about a just transition.
According to former Economy Minister Kemal Derviş, many people will benefit from the green transition, but the losses will be concentrated on specific groups, making them more visible and politically disruptive. At the municipal level, Antalya, Bornova, Bursa, Çankaya, Eskişehir Tepebaşı, Gaziantep, İzmir, Kadıköy, Maltepe, Nilüfer and Seferihisar have sustainable energy and climate plans. A 2021 academic study of local climate change politics said that "local climate action planning takes place independent from the national efforts yet with a commitment to international agreements" and that better co-ordination between local and national government would help planning for climate change adaptation. Turkey ratified the Paris Agreement in 2021: according to Politico the country was persuaded by a 3.2 billion dollar loan from France and Germany for its energy transition, and Turkey's chief negotiator said the threat of the EU CBAM was a factor.

National politics
Some suggest that limiting emissions through directives to the state-owned gas and electricity companies would be less effective than a carbon price, but more politically acceptable. Turkish citizens are taking individual and political action on climate change, both in the streets and online, including children demanding action and petitioning the UN.: 29  The Industrial Development Bank of Turkey says that it has implemented a sustainable business model, and sustainability-themed investments have a 74% share of the bank's loan portfolio. Turkey's Green Party is calling for an end to coal burning and the phasing out of all fossil-fuel use by 2050. Electricity generated from lignite is often described by politicians and the media as generated from "local resources" and added to the renewables percentage. TRT World calls natural gas "blue gold". After the 2020/21 droughts, the Nationalist Movement Party (the smaller party in the governing coalition) said that climate change is a national security issue; the threat of climate change had already been securitized by Environment Minister Murat Kurum in 2019. Also following the droughts, all parties in parliament, including smaller opposition parties like the Peoples' Democratic Party and the Good Party, agreed to set up a Parliamentary Research Commission to combat climate change and drought. A draft climate law, including emissions trading, was considered in 2021 and a revised draft in 2023, but as of 2023 there is no emissions trading. In 2023 there was misinformation about this draft: it aims to keep tariff money within the country by starting carbon emissions trading.

The national energy plan published in 2022 expected 1.7 GW more coal power to be built, but the opposition CHP had already said that no more fossil fuel power plants should be built and that there should be carbon trading. Businesses say the country needs to decarbonize so that money which would otherwise be lost to the CBAM remains in the country, and NGOs and academics have such plans; however, a February 2022 government-led "Climate Council" of all those groups and others issued over 200 recommendations, but not one for a coal phase-out. European Climate Action Network Turkey complained that civil society is not properly represented in decision making, and in particular that there were no organizations such as theirs in the "Emission Reduction Commission" of the Climate Council.
International politics
Murat Kurum has said that global cooperation is key to tackling climate change, and US climate change envoy John Kerry has said that the top 20 emitting countries should reduce emissions immediately. Turkey and some other member countries say the Energy Charter Treaty should be changed to help with decarbonization, but because changes must be unanimous this is unlikely to happen. Turkish Petroleum Corporation (TPAO) is in discussions with private-sector companies about investment in Black Sea fossil gas. The Chinese-funded Emba Hunutlu coal-fired power station started up in 2022. Ratification of the Kigali amendment to the Montreal Protocol, which limits emissions of fluorinated gases, has been approved by Parliament and is awaiting presidential approval. As of 2021 it has not been ratified, but there are some restrictions on selling these gases, and tightening of the 2018 regulation is being considered.

The government says that, as a developing country with less than 1% responsibility for historical greenhouse gas emissions, Turkey's position under the UNFCCC and Paris Agreement is unfair. However, some academics say that low historical greenhouse gas emissions can only be used as a fairness justification under international environmental law by least developed countries and small island developing states. They say that almost all G20 countries, including Turkey, should reduce their emissions below the 2010 level, but that countries with higher historical emissions should reduce emissions more.

The Turkish Industry and Business Association lobbied for ratification of the Paris Agreement. The non-ratification was used as an argument against approval of Woodhouse Colliery in the UK, as opponents said much of the coal would be exported to Turkey. In 2021 Turkey again asked to be removed from Annex I (developed countries) of the UNFCCC, "in order to make our fight against climate change more effective and to have access to climate finance". Some business people said that Turkey does not need more climate funding in order to meet its current commitments, so it should ratify the Paris Agreement and stop building coal power in order to avoid the CBAM. Environmental lawyers became more active in the 2020s, but as of 2021 the European Court of Human Rights has not yet decided whether to hear the case of Duarte Agostinho and Others v. several countries including Turkey, brought by children and young adults. The Paris Agreement was ratified by parliament shortly before the 2021 United Nations Climate Change Conference. Hakan Mining and Generation Industry & Trade Inc. is constructing Gisagara peat-fired power station in Rwanda.

In 2022 the country promised, in its updated first nationally determined contribution (NDC), to cut greenhouse gas emissions 41% compared to business-as-usual (BAU) by 2030. However, because the government says BAU is 1175 Mt CO2eq, this means Turkey's carbon footprint could increase to about 700 Mt by 2030, with emissions peaking by 2038 or before; climate activists say the NDC should have promised an immediate actual reduction. Academics doubt that emissions could be reduced from a 2038 peak to zero by 2053, and say that delaying Turkey's energy transition is more expensive than starting it at once. The 2053 target was reportedly set without consulting the Energy Ministry, and as of 2023 that ministry has not published a decarbonization roadmap.
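The arithmetic behind the NDC criticism above is straightforward: a percentage cut against an inflated baseline can still allow absolute emissions to rise. A minimal check, using only the figures quoted above:

```python
# NDC pledge arithmetic from the figures quoted above: a 41% cut
# against a 1175 Mt CO2eq business-as-usual projection for 2030.
bau_2030_mt = 1175
cut = 0.41

pledged_2030_mt = bau_2030_mt * (1 - cut)
print(f"pledged 2030 ceiling: {pledged_2030_mt:.0f} Mt CO2eq")  # ~693 Mt

# Roughly 700 Mt is above recent emission levels, so the pledge permits
# an absolute increase even though it is framed as a 41% "reduction".
```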
Research and data access
Sabancı University's Shura Energy Transition Center is researching decarbonization pathways. Linear regression, expert judgement and local integrated assessment modelling are used for non-energy projections.: 8 : 33  Emissions from industry have been modelled by the Energy Ministry and the Scientific and Technological Research Council of Turkey using TIMES-MACRO.: 33  On 2021 trends the OECD expects emissions to double from 2015 to 2030.: 59  A "Climate Change Platform" is planned to share studies and data.: 46

Although the OECD praised the government's monitoring, reporting and verification (MRV) system and said in 2021 that it covers half of total emissions,: 61  unlike the public sharing of data in the EU emissions trading system, much detailed emissions data in Turkey is not public. Quantitative estimates of the impact of individual government policies on emissions have not been made or are not publicly available,: 20  and neither are projections of long-term policy impacts.: 21  As of 2021, the most recent UNFCCC expert review noted that up-to-date GDP and population-growth forecasts had not been incorporated into models,: 8  that assumptions such as future energy intensity, energy demand and electricity consumption were unknown,: 8  and that no sensitivity analysis of GHG scenarios had been published.: 11  The UNFCCC has asked for more details needed to understand emission projections, such as assumptions about future tax levels, fuel prices, energy demand and intensity, income and household size,: 12  but Turkey has said that many such details are confidential.: 27–28  Although a number of issues raised in greenhouse gas inventory reviews have been resolved, dozens more have been outstanding for over three years.: 39  Space-based measurements of the signs of emissions have allowed public monitoring of the megacity of Istanbul and high-emitting power plants since the early 2020s.

Notes

References

External links
UNFCCC Turkey documents
Nationally Determined Contribution
Turkey emissions at Climate Trace
Live carbon emissions from electricity generation
Methane map
Fossil fuel registry
Hydrogen strategy (Turkish)
life-cycle greenhouse gas emissions of energy sources
Greenhouse gas emissions are one of the environmental impacts of electricity generation. Measurement of life-cycle greenhouse gas emissions involves calculating the global warming potential of energy sources through life-cycle assessment. These are usually sources of only electrical energy, but sometimes sources of heat are evaluated. The findings are presented in units of global warming potential per unit of electrical energy generated by that source. The scale uses the global warming potential unit, the carbon dioxide equivalent (CO2e), and the unit of electrical energy, the kilowatt hour (kWh). The goal of such assessments is to cover the full life of the source, from material and fuel mining through construction to operation and waste management.

In 2014, the Intergovernmental Panel on Climate Change harmonized the carbon dioxide equivalent (CO2e) findings of the major electricity generating sources in use worldwide, by analyzing the findings of hundreds of individual scientific papers assessing each energy source. Coal is by far the worst emitter, followed by natural gas, with solar, wind and nuclear all low-carbon. Hydropower, biomass, geothermal and ocean power may generally be low-carbon, but poor design or other factors could result in higher emissions from individual power stations.

For all technologies, advances in efficiency, and therefore reductions in CO2e since the time of publication, have not been included. For example, the total life-cycle emissions from wind power may have lessened since publication. Similarly, due to the time frame over which the studies were conducted, CO2e results for nuclear Generation II reactors are presented, rather than the global warming potential of Generation III reactors. Other limitations of the data include: a) missing life-cycle phases, and b) uncertainty as to where to define the cut-off point in the global warming potential of an energy source. The latter is important in assessing a combined electrical grid in the real world, rather than the established practice of simply assessing the energy source in isolation.

Global warming potential of selected electricity sources
1 see also environmental impact of reservoirs#Greenhouse gases.
List of acronyms:
PC — pulverized coal
CCS — carbon capture and storage
IGCC — integrated gasification combined cycle
SC — supercritical
NGCC — natural gas combined cycle
CSP — concentrated solar power
PV — photovoltaic power

Bioenergy with carbon capture and storage
As of 2020, whether bioenergy with carbon capture and storage can be carbon neutral or carbon negative is being researched and is controversial.

Studies after the 2014 IPCC report
Individual studies show a wide range of estimates for fuel sources arising from the different methodologies used. Those on the low end tend to leave parts of the life cycle out of their analysis, while those on the high end often make unrealistic assumptions about the amount of energy used in some parts of the life cycle. Since the 2014 IPCC study, some geothermal plants, such as some geothermal power in Italy, have been found to emit CO2; further research is ongoing in the 2020s.

Ocean energy technologies (tidal and wave) are relatively new, and few studies have been conducted on them. A major issue of the available studies is that they seem to underestimate the impacts of maintenance, which could be significant. An assessment of around 180 ocean technologies found that the GWP of ocean technologies varies between 15 and 105 gCO2eq/kWh, with an average of 53 gCO2eq/kWh.
In a tentative preliminary study of subsea tidal kite technologies, published in 2020, the GWP varied between 15 and 37 gCO2eq/kWh, with a median value of 23.8 gCO2eq/kWh, which is slightly higher than that reported in the 2014 IPCC GWP study mentioned earlier (5.6 to 28, with a mean value of 17 gCO2eq/kWh). In 2021 UNECE published a lifecycle analysis of the environmental impact of electricity generation technologies, accounting for the following impacts: resource use (minerals, metals); land use; resource use (fossils); water use; particulate matter; photochemical ozone formation; ozone depletion; human toxicity (non-cancer); ionising radiation; human toxicity (cancer); eutrophication (terrestrial, marine, freshwater); ecotoxicity (freshwater); acidification; and climate change, with the latter summarized in the table above. In June 2022, Électricité de France published a detailed life-cycle assessment study, following the ISO 14040 standard, showing that the 2019 French nuclear infrastructure produces less than 4 gCO2eq/kWh.

Cutoff points of calculations and estimates of how long plants last
Because most emissions from wind, solar and nuclear do not occur during operation, if these plants are operated for longer and generate more electricity over their lifetimes, then emissions per unit energy will be lower. Therefore, their lifetimes are relevant. Wind farms are estimated to last 30 years: after that, the carbon emissions from repowering would need to be taken into account. Solar panels from the 2010s may have a similar lifetime, but how long 2020s solar panels (such as perovskite) will last is not yet known. Some nuclear plants can be used for 80 years, but others may have to be retired earlier for safety reasons. As of 2020, more than half the world's nuclear plants are expected to request license extensions, and there have been calls for these extensions to be better scrutinised under the Convention on Environmental Impact Assessment in a Transboundary Context. Some coal-fired power stations may operate for 50 years, but others may be shut down after 20 years, or less. According to one 2019 study, considering the time value of GHG emissions in techno-economic assessment considerably increases the life-cycle emissions of carbon-intensive fuels such as coal.

Lifecycle emissions from heating
For residential heating in almost all countries, emissions from natural gas furnaces are more than those from heat pumps. But in some countries, such as the UK, there is an ongoing debate in the 2020s about whether it is better to replace the natural gas used in residential central heating with hydrogen, or to use heat pumps or, in some cases, more district heating.

Fossil gas bridge fuel controversy
As of 2020, whether natural gas should be used as a "bridge" from coal and oil to low-carbon energy is being debated for coal-reliant economies, such as India, China and Germany. Germany, as part of its Energiewende transformation, declared it would preserve coal-based power until 2038 while immediately shutting down its nuclear power plants, which further increased its dependency on fossil gas.

Missing life cycle phases
Although the life-cycle assessments of each energy source should attempt to cover the full life cycle of the source from cradle to grave, they are generally limited to the construction and operation phases. The most rigorously studied phases are those of material and fuel mining, construction, operation, and waste management. However, missing life-cycle phases exist for a number of energy sources.
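How much a missing phase or a lifetime assumption matters can be seen by computing gCO2eq/kWh directly: sum the emissions attributed to each phase and divide by lifetime electricity generation. The sketch below uses invented numbers for a hypothetical wind farm; it is not data from the IPCC harmonization or any study cited here.

```python
# Life-cycle emissions per kWh for a hypothetical wind farm.
# Phase emissions (tonnes CO2e) are invented for illustration.
phases = {
    "materials and construction": 120_000,
    "operation and maintenance": 20_000,
    "decommissioning": 15_000,  # the phase most often omitted
}

CAPACITY_MW = 100
CAPACITY_FACTOR = 0.35

def g_per_kwh(phase_emissions: dict, lifetime_years: float) -> float:
    """Total phase emissions divided by lifetime generation, in gCO2e/kWh."""
    total_g = sum(phase_emissions.values()) * 1e6  # tonnes -> grams
    lifetime_kwh = CAPACITY_MW * 1000 * 8760 * CAPACITY_FACTOR * lifetime_years
    return total_g / lifetime_kwh

print(round(g_per_kwh(phases, 30), 1))  # all phases, 30-year life (~16.9)
no_decom = {k: v for k, v in phases.items() if k != "decommissioning"}
print(round(g_per_kwh(no_decom, 30), 1))  # phase omitted (~15.2)
print(round(g_per_kwh(phases, 20), 1))  # same phases, shorter life (~25.3)
```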
At times, assessments variably and sometimes inconsistently include the global warming potential that results from decommissioning the energy-supplying facility once it has reached its designed lifespan, including the global warming potential of the process of returning the power-supply site to greenfield status. For example, the process of hydroelectric dam removal is usually excluded, as it is a rare practice with little practical data available; dam removal is, however, becoming increasingly common as dams age. Larger dams, such as the Hoover Dam and the Three Gorges Dam, are intended to last "forever" with the aid of maintenance, a period that is not quantified. Therefore, decommissioning estimates are omitted for some energy sources, while other energy sources include a decommissioning phase in their assessments. The median value of 12 g CO2-eq/kWhe for nuclear fission, found in the 2012 Yale University nuclear power review (the paper which also serves as the origin of the 2014 IPCC's nuclear value), does however include the contribution of facility decommissioning, with an "added facility decommissioning" global warming potential in the full nuclear life-cycle assessment.

Thermal power plants, even low-carbon biomass, nuclear or geothermal energy stations, directly add heat energy to the earth's global energy balance. Wind turbines may change both horizontal and vertical atmospheric circulation. But although both of these may slightly change the local temperature, any difference they might make to the global temperature is undetectable against the far larger temperature change caused by greenhouse gases.

See also
Bioenergy with carbon capture and storage
Carbon capture and storage
Carbon footprint
Climate change mitigation
Efficient energy use
Low-carbon economy
Nuclear power proposed as renewable energy

References

External links
National Renewable Energy Laboratory. LCA CO2 emissions of all present day energy sources.
Wise uranium CO2 calculator
greenhouse gas emissions by russia
Greenhouse gas emissions by Russia are mostly from fossil gas, oil and coal. Russia emits 2: 17  or 3 billion tonnes CO2eq of greenhouse gases each year, about 4% of world emissions. Annual carbon dioxide emissions alone are about 12 tonnes per person, more than double the world average. Cutting greenhouse gas emissions, and therefore air pollution in Russia, would have health benefits greater than the cost. The country is the world's biggest methane emitter, and 4 billion dollars' worth of methane was estimated to have leaked in 2019/20.

Russia's greenhouse gas emissions decreased by 30% between 1990 and 2018, excluding emissions from land use, land-use change and forestry (LULUCF). Russia's goal is to reach net zero by 2060, but its energy strategy to 2035 is mostly about burning more fossil fuels.

Sources
Greenhouse gas emissions by Russia have a great impact on climate change, since the country is the fourth-largest greenhouse gas emitter in the world. Climate Trace estimates that 60% of the country's emissions come from fossil fuel operations and 24% from the power sector. In 2017, Russia emitted 2155 Mt of CO2, while 578 Mt was reabsorbed by land use, land-use change, and forestry (LULUCF). Russia must submit its inventory of 2018 emissions to the UNFCCC by 15 April 2020, and so on for each calendar year. In 2017, Russia emitted 11.32 tonnes of CO2 per person. But according to the Washington Post, methane emissions are under-reported.

Energy
In 2017 Russia's energy sector, which under IPCC guidelines includes fuel for transport, emitted almost 80% of the country's greenhouse gases, and Industrial Processes and Product Use (IPPU) emitted over 10%. The largest emitters are energy industries (mainly electricity generation), followed by fugitive emissions from fuels, and then transport. According to Climate Trace, the largest point source is the Urengoyskoye gas field, at over 150 Mt in 2021.

Energy from fossil fuels
Most emissions are from the energy sector burning fossil fuels. According to the Russian Science Foundation in 2019, the natural influx of greenhouse gases from terrestrial ecosystems in Russia constantly changes. Measurements of these influxes have shown that, over short time intervals, they contribute to the deceleration of warming in Russia. This is because the decelerating effect of CO2 absorption from the atmosphere by terrestrial ecosystems is stronger than the accelerating effect of CH4 emissions into the atmosphere. Under all studied scenarios of anthropogenic impacts, this decelerating effect grows in the first half of the 21st century and, after reaching a maximum that depends on the emissions scenario, decreases by the end of the century, as natural emissions of CH4 grow and CO2 absorption by terrestrial ecosystems declines. Accordingly, under the scenarios considered, natural emissions from the Russian regions will accelerate climate warming on short time horizons under the climate conditions of the second half of the 21st century.
Electricity generation
Public information from space-based measurements of carbon dioxide by Climate Trace was expected to reveal individual large plants before the 2021 United Nations Climate Change Conference.

Gas fired power stations
Gas-fired power stations are a major source.

Agriculture
In 2017, agriculture emitted 6% of Russia's greenhouse gases.

Waste
In 2017, waste emitted 4% of the country's greenhouse gases.

Land
Russian challenges for forests include control of illegal logging, corruption, forest fires and land use. As well as trees, peat burning in wildfires emits carbon. Black carbon on Arctic snow and ice is a problem, as it absorbs heat.

Mitigation
Energy
In 2020, Russia released a draft long-term strategy to reduce CO2 emissions by 33% by 2030 compared to 1990; it did not plan to reach net zero until as late as 2100. Reducing methane leaks would help, as Russia is the largest methane emitter.

Industry
Efforts to decarbonize steel and aluminium production were delayed by the Russo-Ukrainian war and the international sanctions imposed during the 2022 Russian invasion of Ukraine.

Economics
As Russia has no carbon tax or emissions trading, it could be vulnerable to future carbon tariffs imposed by the EU or other export partners.

Carbon sinks
Carbon sinks, which in Russia consist mainly of forests, offset about a quarter of national emissions in 2017.

See also
Climate Doctrine of the Russian Federation
Energy policy of Russia
Greenhouse gas inventory
List of countries by carbon dioxide emissions
Plug-in electric vehicles in Russia

References

External links
UNFCCC Russia documents - see April NIR and CRF for figures for this article
Live carbon emissions from electricity generation in European Russia and Ural
Live carbon emissions from electricity generation in Siberia
Greenhouse Gas Inventory Data - Flexible Queries Annex I Parties
NDC Registry
Climate Action Tracker: Russia
Climate Watch: Russia
list of countries by carbon dioxide emissions
This is a list of sovereign states and territories by carbon dioxide emissions due to certain forms of human activity, based on the EDGAR database created by the European Commission and the Netherlands Environmental Assessment Agency. The following table lists the 1970, 1990, 2005, 2017 and 2022 annual CO2 emissions estimates (in kilotons of CO2 per year) along with calculated emissions per capita (in tons of CO2 per year). The data only consider carbon dioxide emissions from the burning of fossil fuels and cement manufacture, not emissions from land use, land-use change and forestry. Over the last 150 years, estimated cumulative emissions from land use and land-use change represent approximately one-third of total cumulative anthropogenic CO2 emissions. Emissions from international shipping or bunker fuels are also not included in national figures, which can make a large difference for small countries with important ports.

In 2022, CO2 emissions from the top 10 countries with the highest emissions accounted for almost two-thirds of the global total. Since 2006, China has been emitting more CO2 than any other country. However, the main disadvantage of measuring total national emissions is that it does not take population size into account. China has the largest CO2 emissions in the world, but also the largest population. For a fair comparison, emissions should be analyzed in terms of the amount of CO2 per capita. Considering CO2 per capita emissions in 2022, China's levels (8.85) are almost half those of the United States (14.44) and less than a sixth of those of Palau (59.00, the country with the highest emissions of CO2 per capita).

Measures of territorial-based emissions, also known as production-based emissions, do not account for emissions embedded in global trade, where emissions may be imported or exported in the form of traded goods, as they only report emissions emitted within geographical boundaries. Accordingly, a proportion of the CO2 produced and reported in Asia and Africa is for the production of goods consumed in Europe and North America.

The European Union is at the forefront of international efforts to reduce greenhouse gas emissions and thus safeguard the planet's climate. Greenhouse gases (GHG), primarily carbon dioxide but also others including methane and chlorofluorocarbons, trap heat in the atmosphere, leading to global warming. Higher temperatures then act on the climate, with varying effects: for example, dry regions might become drier while, at the poles, the ice caps are melting, causing higher sea levels. In 2016, the global average temperature was already 1.1°C above pre-industrial levels.

According to the review of the scientific literature conducted by the Intergovernmental Panel on Climate Change (IPCC), carbon dioxide is the most important anthropogenic greenhouse gas by warming contribution. The other major anthropogenic greenhouse gases: 147  are not included in the following list, nor are human emissions of water vapor (H2O), the most important greenhouse gas, as they are negligible compared to naturally occurring quantities. Space-based measurements of carbon dioxide should allow independent monitoring in the mid-2020s.

Per capita CO2 emissions

Fossil CO2 emissions by country/region
The data in the following table is extracted from EDGAR - Emissions Database for Global Atmospheric Research.
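Per capita figures like those above are derived by dividing a country's total annual emissions by its population. A minimal sketch of the conversion, using invented EDGAR-style inputs (kilotons of CO2 and population counts) rather than actual EDGAR records:

```python
# Per capita emissions = total emissions / population.
# Inputs are invented EDGAR-style values, not real records:
# total annual CO2 in kilotons, population in persons.
countries = {
    "Country A": (10_000_000, 1_400_000_000),  # large total, large population
    "Country B": (5_000_000, 330_000_000),
    "Country C": (200, 18_000),                # tiny state, high per capita
}

for name, (co2_kt, population) in countries.items():
    tonnes_per_person = co2_kt * 1000 / population
    print(f"{name}: {tonnes_per_person:.2f} t CO2 per capita")
```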
Maps and charts

Notes

References

See also
List of countries by carbon dioxide emissions per capita
List of countries by carbon intensity of GDP
List of countries by renewable electricity production
List of countries by greenhouse gas emissions
List of countries by greenhouse gas emissions per person
Top contributors to greenhouse gas emissions
United Nations | Sustainable Development Goal 13 - Climate action
General: World energy supply and consumption

External links
UN Sustainable Development Knowledge Platform – The SDGs
GHG data from UNFCCC – United Nations Framework Convention on Climate Change greenhouse gas (GHG) emissions data
CO2 emissions in kilotons – World Bank
CO2 emissions in metric tons per capita – Google Public Data Explorer
greenhouse gas inventory
Greenhouse gas inventories are emission inventories of greenhouse gas emissions that are developed for a variety of reasons. Scientists use inventories of natural and anthropogenic (human-caused) emissions as tools when developing atmospheric models. Policy makers use inventories to develop strategies and policies for emissions reductions and to track the progress of those policies. Regulatory agencies and corporations also rely on inventories to establish compliance records with allowable emission rates. Businesses, the public, and other interest groups use inventories to better understand the sources and trends in emissions.

Unlike some other air emission inventories, greenhouse gas inventories include not only emissions from source categories, but also removals by carbon sinks. These removals are typically referred to as carbon sequestration. Greenhouse gas inventories typically use global warming potential (GWP) values to combine emissions of various greenhouse gases into a single weighted value of emissions.

Some of the key examples of greenhouse gas inventories include:
All Annex I countries are required to report annual emissions and sinks of greenhouse gases under the United Nations Framework Convention on Climate Change (UNFCCC).
National governments that are Parties to the UNFCCC and/or the Kyoto Protocol are required to submit annual inventories of all anthropogenic greenhouse gas emissions from sources and removals from sinks. The Kyoto Protocol includes additional requirements for national inventory systems, inventory reporting, and annual inventory review for determining compliance with Articles 5 and 8 of the Protocol.
Project developers under the Clean Development Mechanism of the Kyoto Protocol prepare inventories as part of their project baselines.
Scientific efforts aimed at understanding the detail of total net carbon exchange. Example: Project Vulcan, a comprehensive US inventory of fossil-fuel greenhouse gas emissions.

ISO 14064
The ISO 14064 standards (published in 2006 and early 2007) are the most recent additions to the ISO 14000 series of international standards for environmental management. The ISO 14064 standards provide governments, businesses, regions and other organisations with an integrated set of tools for programs aimed at measuring, quantifying and reducing greenhouse gas emissions. These standards allow organisations to take part in emissions trading schemes using a globally recognised standard.

Local Government Operations Protocol
The Local Government Operations Protocol (LGOP) is a tool for accounting and reporting greenhouse gas emissions across a local government's operations. Adopted by the California Air Resources Board (ARB) in September 2008 for local governments to develop and report consistent GHG inventories to help meet California's AB 32 GHG reduction obligations, it was developed in partnership with the California Climate Action Registry, The Climate Registry, ICLEI and dozens of stakeholders. The California Sustainability Alliance also created the Local Government Operations Protocol Toolkit, which breaks down the complexities of the LGOP manual and provides an area-by-area summary of the recommended inventory protocols.
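At its core, an inventory entry is activity data multiplied by an emission factor, and the resulting gases are combined into CO2-equivalents using the GWP weights described above. The sketch below is a minimal illustration: the emission factors are approximate IPCC-style defaults for natural gas combustion and the GWP100 values are approximate AR5 figures, so treat all numbers as assumptions rather than reporting-grade constants.

```python
# Minimal inventory sketch: emissions = activity data x emission factor,
# then gases are aggregated to CO2e using GWP100 weights.
# Approximate AR5 GWP100 values; real inventories prescribe exact ones.
GWP100 = {"CO2": 1, "CH4": 28, "N2O": 265}

# Approximate IPCC-style default emission factors for natural gas
# combustion, in kg of gas emitted per TJ of fuel burned (illustrative).
EF_NATURAL_GAS = {"CO2": 56_100, "CH4": 1, "N2O": 0.1}

fuel_burned_tj = 500  # activity data: energy content of fuel combusted

total_co2e_kg = sum(
    fuel_burned_tj * ef * GWP100[gas]
    for gas, ef in EF_NATURAL_GAS.items()
)
print(f"{total_co2e_kg / 1e6:.1f} kt CO2e")  # ~28.1 kt
```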
IPCC format for GHG emissions inventories
The data in a GHG emissions inventory is presented using the IPCC format (seven sectors presented using the Common Reporting Format, or CRF), as is all communication between Member States and the Secretariat of the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol.

Greenhouse gas emissions accounting
Greenhouse gas emissions accounting is measuring the amount of greenhouse gases (GHG) emitted during a given period of time by a polity, usually a country but sometimes a region or city. Such measures are used to conduct climate science and climate policy. There are two main, conflicting ways of measuring GHG emissions: production-based (also known as territorial-based) and consumption-based. The Intergovernmental Panel on Climate Change defines production-based emissions as taking place "within national territory and offshore areas over which the country has jurisdiction". Consumption-based emissions take into account the effects of trade, encompassing the emissions from domestic final consumption and those caused by the production of its imports. From the perspective of trade, consumption-based emissions accounting is thus the reverse of production-based emissions accounting, which includes exports but excludes imports (Table 1).

The choice of accounting method can have very important effects on policymaking, as each measure can generate a very different result. Different values for a national greenhouse gas emissions inventory (NEI) could lead a country to choose different optimal mitigation activities, and a wrong choice based on wrong information could be damaging. The application of production-based emissions accounting is currently favoured in policy terms as it is easier to measure, although much of the scientific literature favours consumption-based accounting. The former method is criticised in the literature principally for its inability to allocate emissions embodied in international trade/transportation and for the potential for carbon leakage.

Almost all countries in the world are parties to the Paris Agreement, which requires them to provide regular production-based GHG emissions inventories to the United Nations Framework Convention on Climate Change (UNFCCC), in order to track both countries' achievement of their nationally determined contributions and climate policies, as well as regional climate policies such as the EU Emissions Trading Scheme (ETS), and the world's progress in limiting global warming. Under an earlier UNFCCC agreement, greenhouse gas emissions by Turkey will continue to be inventoried even if it is not party to the Paris Agreement.

Rationale
It is now overwhelmingly accepted that the release of GHG, predominantly from the anthropogenic burning of fossil fuels and the release of direct emissions from agricultural activities, is accelerating the growth of these gases in the atmosphere, resulting in climate change. Over the last few decades emissions have grown at an increasing rate, from 1.0% yr−1 throughout the 1990s to 3.4% yr−1 between 2000 and 2008. These increases have been driven not only by a growing global population and per-capita GDP, but also by global increases in the energy intensity of GDP (energy per unit GDP) and the carbon intensity of energy (emissions per unit energy).
These drivers are most apparent in developing markets (Kyoto non-Annex B countries), but what is less apparent is that a substantial fraction of the growth in these countries is to satisfy the demand of consumers in developed countries (Kyoto Annex B countries). This is exaggerated by a process known as carbon leakage, whereby Annex B countries decrease domestic production in favour of increased importation of products from non-Annex B countries where emission policies are less strict. Although this may seem the rational choice for consumers when considering local pollutants, consumers are inescapably affected by global pollutants such as GHG, irrespective of where production occurs. Although emissions have slowed since 2007 as a result of the global financial crisis, the longer-term trend of increased emissions is likely to resume.

Today, much international effort is put into slowing the anthropogenic release of GHG and the resulting climate change. In order to set benchmarks and emissions targets for international and regional policies, as well as to monitor and evaluate their progress, the accurate measurement of each country's NEI becomes imperative.

Measuring GHG emissions
There are two main, conflicting ways of measuring GHG emissions: production-based (also known as territorial-based) and consumption-based.

Production-based accounting
As production-based emissions accounting is currently favoured in policy terms, its methodology is well established. Emissions are calculated not directly but indirectly from fossil fuel usage and other relevant processes, such as industry and agriculture, according to the 2006 guidelines issued by the IPCC for GHG reporting. The guidelines span numerous methodologies dependent on the level of sophistication (Tiers 1–3 in Table 2). The simplest methodology combines the extent of human activity with a coefficient quantifying the emissions from that activity, known as an 'emission factor'. For example, to estimate emissions from the energy sector (typically contributing over 90% of CO2 emissions and 75% of all GHG emissions in developed countries), the quantity of fuels combusted is combined with an emission factor, the level of sophistication increasing with the accuracy and complexity of the emission factor. Table 2 outlines how the UK implements these guidelines to estimate some of its emissions-producing activities.

Consumption-based accounting
Consumption-based emissions accounting has an equally established methodology using input-output tables. These "display the interconnection between different sectors of production and allow for a tracing of the production and consumption in an economy" and were originally created for national economies. However, as production has become increasingly international and the import/export market between nations has flourished, multi-regional input-output (MRIO) models have been developed. The unique feature of MRIO is allowing a product to be traced across its production cycle, "quantifying the contributions to the value of the product from different economic sectors in various countries represented in the model. It hence offers a description of the global supply chains of products consumed". From this, assuming regional- and industry-specific data for CO2 emissions per unit of output are available, the total amount of emissions for the product can be calculated, and therefore the amount of emissions for which the final consumer is allocated responsibility.
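The MRIO calculation just described is usually written as the environmentally extended Leontief model: emissions attributed to final demand are f (I - A)^-1 y, where A is the matrix of technical coefficients, y is final demand, and f holds emission intensities per unit of output. Below is a deliberately tiny two-sector sketch with invented numbers; a real MRIO database such as EORA or WIOD would involve thousands of sector-region pairs.

```python
# Environmentally extended input-output sketch: emissions attributed to
# final demand = f (I - A)^-1 y. All numbers are invented.
import numpy as np

A = np.array([[0.1, 0.3],    # technical coefficients: inputs needed from
              [0.2, 0.1]])   # each sector per unit of each sector's output
y = np.array([100.0, 50.0])  # final demand by sector (monetary units)
f = np.array([0.8, 0.3])     # emission intensity per unit output (kt CO2)

# Total output needed to satisfy final demand, via the Leontief inverse.
x = np.linalg.solve(np.eye(2) - A, y)

sector_emissions = f * x
print("output:", x.round(1))
print("emissions by sector (kt):", sector_emissions.round(1))
print("total attributed to final demand (kt):", sector_emissions.sum().round(1))
```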
The two methodologies of emissions accounting begin to expose their key differences. Production-based accounting is transparently consistent with GDP, whereas consumption-based accounting (more complex and uncertain) is consistent with national consumption and trade. However, the most important difference is that the latter covers global emissions, including those 'embodied' emissions that are omitted in production-based accounting, and offers globally based mitigation options. Thus the attribution of emissions embodied in international trade is the crux of the matter.

Emissions embodied in international trade
Figure 1 and Table 3 show the extent of emissions embodied in international trade and thus their importance when attempting emissions reductions. Figure 1 shows the international trade flows of the top 10 countries with the largest trade fluxes in 2004, and illustrates the dominance of trade from developing countries (principally China, Russia and India) to developed countries (principally the USA, EU and Japan). Table 3 supports this, showing that traded emissions in 2008 totalled 7.8 gigatonnes (Gt), with a net CO2 emissions trade from developing to developed countries of 1.6 Gt.

Table 3 also shows how these processes of production, consumption and trade changed from 1990 (commonly chosen for baseline levels) to 2008. Global emissions rose 39%, but in the same period developed countries appeared to stabilize their domestic emissions, whereas developing countries' domestic emissions doubled. This 'stabilization' is arguably misleading, however, if the increased trade from developing to developed countries is considered. This trade increased from 0.4 Gt CO2 to 1.6 Gt CO2, a 17%/year average growth, meaning 16 Gt CO2 were traded from developing to developed countries between 1990 and 2008. Assuming a proportion of the increased production in developing countries is to fulfil the consumption demands of developed countries, the process known as carbon leakage becomes evident. Thus, including international trade (i.e. the methodology of consumption-based accounting) reverses the apparent decreasing trend in emissions in developed countries, changing a 2% decrease (as calculated by production-based accounting) into a 7% increase across the time period.

This point is only further emphasized when these trends are studied at a less aggregated scale. Figure 2 shows the percentage surplus of emissions as calculated by production-based accounting over consumption-based accounting. In general, production-based accounting proposes lower emissions for the EU and OECD countries (developed countries) and higher emissions for BRIC and the rest of the world (developing countries). However, consumption-based accounting proposes the reverse, with lower emissions in BRIC and RoW, and higher emissions in EU and OECD countries. This led Boitier to term the EU and OECD 'CO2 consumers' and BRIC and RoW 'CO2 producers'. The large difference in these results is corroborated by further analysis: the EU-27 in 1994 counted emissions using the consumption-based approach at 11% higher than those counted using the production-based approach, this difference rising to 24% in 2008. Similarly, OECD countries reached a peak variance of 16% in 2006 whilst dropping to 14% in 2008. In contrast, although RoW starts and ends relatively equal, in the intervening years it is a clear CO2 producer, as is BRIC, with an average consumption-based emissions deficit of 18.5% compared to production-based emissions.
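The adjustment being described is simple in form: consumption-based emissions equal production-based emissions minus the emissions embodied in exports plus those embodied in imports. A small sketch with illustrative magnitudes in the spirit of the Table 3 discussion above, not the actual published data:

```python
# Consumption-based = production-based - embodied exports + embodied imports.
# Illustrative Gt CO2 figures in the spirit of the discussion above,
# not the published Table 3 data.
def consumption_based(production: float, exported: float,
                      imported: float) -> float:
    return production - exported + imported

developed = consumption_based(production=14.0, exported=2.0, imported=3.6)
developing = consumption_based(production=16.0, exported=3.6, imported=2.0)

print(f"developed:  {developed:.1f} Gt (net importer of embodied CO2)")
print(f"developing: {developing:.1f} Gt (net exporter of embodied CO2)")
# The 1.6 Gt difference between embodied imports and exports mirrors the
# net emissions trade from developing to developed countries in the text.
```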
Peters and Hertwich completed an MRIO study to calculate the emissions embodied in international trade using data from the 2001 Global Trade Analysis Program (GTAP). After manipulation, although their numbers are slightly more conservative (EU 14%; OECD 3%; BRIC 16%; RoW 6%) than Boitier's, the same trend is evident: developed countries are CO2 consumers and developing countries are CO2 producers. This trend is seen across the literature, supporting the use of consumption-based emissions accounting in policy-making decisions.

Advantages and disadvantages of consumption-based accounting
Advantages
Consumption-based emissions accounting may be deemed superior as it incorporates embodied emissions currently ignored by the UNFCCC-preferred production-based accounting. Other key advantages include: extending mitigation options, covering more global emissions through increased participation, and inherently encompassing policies such as the Clean Development Mechanism (CDM).

Extending mitigation options
Under the production-based system a country is punished for having a pollution-intensive resource base. If this country has pollution-intensive exports, such as Norway, where 69% of CO2 emissions are the result of production for export, a simple way to meet its emissions reductions set out under Kyoto would be to reduce its exports. Although this would be environmentally advantageous, it would be economically and politically harmful, as exports are an important part of a country's GDP. However, with appropriate mechanisms in place, such as a harmonized global tax, border-tax adjustment or quotas, a consumption-based accounting system could shift the comparative advantage towards decisions that include environmental factors. The tax most discussed is based on the carbon content of the fossil fuels used to produce and transport the product: the greater the level of carbon used, the more tax charged. If a country did not voluntarily participate, a border tax could be imposed on it. This system would have the effect of embedding the cost of the environmental load in the price of the product, so that market forces would shift production to where it is economically and environmentally preferable, thus reducing GHG emissions.

Increasing participation
In addition to reducing emissions directly, this system may also alleviate competitiveness concerns in two ways: firstly, domestic and foreign producers are exposed to the same carbon tax; and secondly, if multiple countries are competing for the same export market, they can promote environmental performance as a marketing tool. A loss of competitiveness resulting from the absence of legally binding commitments for non-Annex B countries was the principal reason the US and Australia, two heavily emitting countries, did not originally ratify the Kyoto Protocol (Australia later ratified in 2007). By alleviating such concerns, more countries may participate in future climate policies, resulting in a greater percentage of global emissions being covered by legally binding reduction policies.
Furthermore, as developed countries are currently expected to reduce their emissions more than developing countries, the more emissions are (fairly) attributed to developed countries, the more emissions become covered by legally binding reduction policies. Peters argues that this last prediction means that consumption-based accounting would advantageously result in greater emissions reductions irrespective of increased participation.

Encompassing policies such as the CDM
The CDM is a flexible mechanism set up under the Kyoto Protocol with the aim of creating 'carbon credits' for trade in trading schemes such as the EU ETS. Despite coming under heavy criticism (see Evans, p. 134-135; and Burniaux et al., p. 58-65), the theory is that, as the marginal cost of environmental abatement is lower in non-Annex B countries, a scheme like this will promote technology transfer from Annex B to non-Annex B countries, resulting in cheaper emissions reductions. Because under consumption-based emissions accounting a country is responsible for the emissions caused by its imports, it is important for the importing country to encourage good environmental behaviour and to promote the cleanest production technologies available in the exporting country. Therefore, unlike the Kyoto Protocol, where the CDM was added later, consumption-based emissions accounting inherently promotes clean development in the foreign country because of the way it allocates emissions. One loophole that remains relevant is carbon colonialism, whereby developed countries do not mitigate the underlying problem but simply continue to increase consumption, offsetting this by exploiting the abatement potential of developing countries.

Disadvantages and implementation
Despite its advantages, consumption-based emissions accounting is not without drawbacks. These were highlighted above and in Table 1 and are principally: greater uncertainty, greater complexity requiring more data that is not always available, and the need for greater international collaboration.

Greater uncertainty and complexity
Uncertainty derives from three main reasons: production-based accounting is much closer to statistical sources and GDP, which are more assured; the methodology behind consumption-based accounting requires an extra step over production-based accounting, this step inherently incurring further doubt; and consumption-based accounting includes data from all trading partners of a particular country, which will contain different levels of accuracy. The bulk of data required is its second pitfall, as in some countries the lack of data means consumption-based accounting is not possible. However, levels and accuracy of data will improve as more and better techniques are developed and the scientific community produces more data sets - examples include the recently launched global databases EORA from the University of Sydney, the EXIOPOL and WIOD databases from European consortia, and the Asian IDE-JETRO. In the short term it will be important to attempt to quantify the level of uncertainty more accurately.

Greater international co-operation
The third problem is that consumption-based accounting requires greater international collaboration to deliver effective results. A government has the authority to implement policies only over emissions it directly generates. In consumption-based accounting, emissions from different geopolitical territories are allocated to the importing country.
Although the importing country can indirectly oppose this by changing its importing habits or by applying a border tax as discussed, only through greater international collaboration, via an international dialogue such as the UNFCCC, can direct and meaningful emissions reductions be enforced. Sharing emissions responsibility Thus far it has been implied that one must implement either production-based accounting or consumption-based accounting. However, there are arguments that the answer lies somewhere in the middle, i.e. emissions should be shared between the importing and exporting countries. This approach asserts that although it is the final consumer that ultimately initiates the production, the activities that create the product and the associated pollution also contribute to the producing country's GDP. This topic is still developing in the literature, principally through works by Rodrigues et al., Lenzen et al. and Marques et al., as well as through empirical studies such as those by Andrew and Forgie. Crucially, it proposes that at each stage of the supply chain the emissions are shared by some pre-defined criteria between the different actors involved. Whilst this approach of sharing emissions responsibility seems advantageous, controversy arises over what these pre-defined criteria should be. Two of the current front runners are Lenzen et al., who say “the share of responsibility allocated to each agent should be proportional to its value added”, and Rodrigues et al., who say it should be based on “the average between an agent's consumption-based responsibility and income-based responsibility” (quoted in Marques et al.). As yet no criteria set has been adequately developed, and further work is needed to produce a finished methodology for a potentially valuable concept. The future Measures of regions' GHG emissions are critical to climate policy. It is clear that production-based emissions accounting, the currently favoured method for policy-making, significantly underestimates the level of GHG emitted by excluding emissions embodied in international trade. Under consumption-based accounting, which includes such emissions, developed countries take a greater share of GHG emissions, and consequently the low emissions commitments of developing countries become less important. Not only does consumption-based accounting encompass global emissions, it promotes good environmental behaviour and increases participation by alleviating competitiveness concerns. Despite these advantages, the shift from production-based to consumption-based accounting arguably represents a shift from one extreme to another. The third option of sharing responsibility between importing and exporting countries represents a compromise between the two systems (see the sketch below). However, as yet no adequately developed methodology exists for this third way, so further study is required before it can be implemented for policy-making decisions. Today, given its lower uncertainty, established methodology and reporting, consistency between political and environmental boundaries, and widespread implementation, it is hard to see any movement away from the favoured production-based accounting. However, because of its key disadvantage of omitting emissions embodied in international trade, it is clear that consumption-based accounting provides invaluable information and should at least be used as a 'shadow' to production-based accounting.
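To make the two allocation rules above concrete, here is a minimal numerical sketch (all figures hypothetical) of the consumption-based identity and of a value-added sharing rule in the spirit of Lenzen et al.; it is an illustration of the concept, not an implementation of any published methodology:

```python
# Hypothetical two-country example; all figures illustrative, not real data.

def consumption_based(production, exported, imported):
    """Consumption-based emissions = production - exported + imported (MtCO2)."""
    return production - exported + imported

# A country produces 100 MtCO2, of which 30 Mt are embodied in its exports;
# 10 Mt are embodied in the goods it imports.
print(consumption_based(production=100, exported=30, imported=10))  # 80 MtCO2

# Sharing responsibility along a supply chain in proportion to value added
# (the Lenzen et al. criterion quoted above). 'stages' lists each actor's
# value added for one product whose production emits 50 MtCO2 in total.
def share_by_value_added(total_emissions, value_added_by_stage):
    total_va = sum(value_added_by_stage.values())
    return {actor: total_emissions * va / total_va
            for actor, va in value_added_by_stage.items()}

stages = {"miner": 10.0, "manufacturer": 60.0, "retailer": 30.0}
print(share_by_value_added(50.0, stages))
# {'miner': 5.0, 'manufacturer': 30.0, 'retailer': 15.0}
```

The Rodrigues et al. criterion would instead average each agent's consumption-based and income-based responsibility; the supply-chain structure of the calculation is the same.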
With further work into the methodologies of consumption-based accounting and sharing emissions responsibility, both can play greater roles in the future of climate policy. See also Carbon footprint Environmental economics Global warming Kyoto Protocol Paris Agreement Greenhouse gas monitoring Greenhouse Gases Observing Satellite (GOSAT) (Ibuki) Sources Further reading Intergovernmental Panel on Climate Change (IPCC) national greenhouse gas inventory guidance manuals UNFCCC National Inventory process The GHG Protocol (WRI/WBCSD) - A corporate accounting and reporting standard ISO 14064 standards for greenhouse gas accounting and verification IPCC National Greenhouse Gas Inventories Programme U.S. EPA Greenhouse Gas Emission Inventories The Climate Registry California Climate Registry External links "Paris Reality Check: PRIMAP-crf". www.pik-potsdam.de. Retrieved 2020-04-16. National inventories of GHG emitted in 2021 (received by the UNFCCC in 2023) Greenhouse Gas Inventory Data – Flexible Queries Annex I Parties
list of countries by carbon dioxide emissions per capita
This is a list of sovereign states and territories by per capita carbon dioxide emissions due to certain forms of human activity, based on the EDGAR database created by the European Commission. The following table lists the 1970, 1990, 2005, 2017 and 2022 annual per capita CO2 emissions estimates (in tonnes of CO2 per person per year). The data only consider carbon dioxide emissions from the burning of fossil fuels and cement manufacture, not emissions from land use, land-use change and forestry. Over the last 150 years, estimated cumulative emissions from land use and land-use change represent approximately one-third of total cumulative anthropogenic CO2 emissions. Emissions from international shipping or bunker fuels are also not included in national figures, which can make a large difference for small countries with important ports. The Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report finds that the "Agriculture, Forestry and Other Land Use (AFOLU)" sector accounted on average for 13-21% of global total anthropogenic GHG emissions in the period 2010-2019. Land use change drives net AFOLU CO2 emission fluxes, with deforestation responsible for 45% of total AFOLU emissions. In addition to being a net carbon sink and a source of GHG emissions, land plays an important role in climate through albedo effects, evapotranspiration, and aerosol loading through emissions of volatile organic compounds. The IPCC report finds that the LULUCF sector offers significant near-term mitigation potential while providing food, wood and other renewable resources as well as biodiversity conservation. Mitigation measures in forests and other natural ecosystems provide the largest share of the LULUCF mitigation potential between 2020 and 2050. Among the various LULUCF activities, reducing deforestation has the largest potential to reduce anthropogenic GHG emissions, followed by carbon sequestration in agriculture and ecosystem restoration, including afforestation and reforestation. Land use change emissions can be negative. According to a 2023 Science for Policy report by the Joint Research Centre (JRC, the European Commission's science and knowledge service) and the International Energy Agency (IEA), global GHG emissions in 2022 consisted primarily of CO2 resulting from the combustion of fossil fuels (71.6%). In 2022, CO2 emissions from the top 10 countries with the highest emissions accounted for almost two thirds of the global total. Since 2006, China has been emitting more CO2 than any other country. The main advantage of measuring emissions per capita is that it takes population size into account: China has the largest CO2 emissions in the world, but also the largest population. For a fair comparison, emissions should be analyzed in terms of the amount of CO2 per capita. Considering CO2 per capita emissions in 2022, China's level (8.85) is around 60% of that of the United States (14.44) and less than a sixth of that of Palau (59.00, the country with the highest CO2 emissions per capita). Measures of territorial-based emissions, also known as production-based emissions, do not account for emissions embedded in global trade, where emissions may be imported or exported in the form of traded goods, as they only report emissions emitted within geographical boundaries.
Accordingly, a proportion of the CO2 produced and reported in Asia and Africa is for the production of goods consumed in Europe and North America. Greenhouse gases (GHG) – primarily carbon dioxide but also others, including methane and chlorofluorocarbons – trap heat in the atmosphere, leading to global warming. Higher temperatures then act on the climate, with varying effects: for example, dry regions might become drier while, at the poles, the ice caps are melting, causing higher sea levels. In 2016, the global average temperature was already 1.1°C above pre-industrial levels. According to the review of the scientific literature conducted by the Intergovernmental Panel on Climate Change (IPCC), carbon dioxide is the most important anthropogenic greenhouse gas by warming contribution. The other major anthropogenic greenhouse gases are not included in the following list, nor are human emissions of water vapor (H2O), the most important greenhouse gas, as they are negligible compared to naturally occurring quantities. CO2 emissions Per capita CO2 emissions by country/territory The data in the following table is extracted from EDGAR - Emissions Database for Global Atmospheric Research. CO2 emissions per capita embedded in global trade CO2 emissions are typically measured on the basis of 'production'. This accounting method, sometimes referred to as 'territorial' emissions, is used when countries report their emissions and set targets domestically and internationally. In addition to the commonly reported production-based emissions, statisticians also calculate 'consumption-based' emissions, which are adjusted for trade. To calculate consumption-based emissions, traded goods are tracked across the world: whenever a good is imported, all CO2 emissions that were emitted in the production of that good are counted as imports, and conversely all CO2 emissions that were emitted in the production of exported goods are subtracted. Consumption-based emissions reflect the consumption and lifestyle choices of a country's citizens. They are national or regional emissions that have been adjusted for trade, calculated as domestic (or 'production-based') emissions minus the emissions generated in the production of goods and services that are exported to other countries or regions, plus emissions from the production of goods and services that are imported. Consumption-based emissions = Production-based emissions - Exported emissions + Imported emissions. The trade adjustment is measured as the net import-export balance in tonnes of CO2 per year: positive values represent net importers of CO2, negative values net exporters of CO2. The data in the following table is extracted from the Our World in Data database. Notes References See also List of countries by carbon dioxide emissions List of countries by greenhouse gas emissions List of countries by greenhouse gas emissions per capita Climate change Land use, land-use change, and forestry (LULUCF) List of countries by carbon intensity of GDP List of countries by renewable electricity production United Nations | Sustainable Development Goal 13 - Climate action External links UN Sustainable Development Knowledge Platform – The SDGs GHG data from UNFCCC – United Nations Framework Convention on Climate Change greenhouse gas (GHG) emissions data Total greenhouse gas emissions (kt of CO2 equivalent) – World Bank CO2 emissions in metric tons per capita – Google Public Data Explorer
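The trade adjustment and sign convention described above can be sketched in a few lines of Python (all figures hypothetical):

```python
# Minimal sketch of the consumption-based trade adjustment; figures hypothetical.

def consumption_emissions(production_mt, exported_mt, imported_mt):
    """National consumption-based emissions in MtCO2."""
    return production_mt - exported_mt + imported_mt

def net_trade_balance(exported_mt, imported_mt):
    """Net import-export balance: positive = net importer of CO2."""
    return imported_mt - exported_mt

def per_capita(total_mt, population):
    """Convert national MtCO2 to tonnes of CO2 per person."""
    return total_mt * 1e6 / population

production, exported, imported = 400.0, 120.0, 180.0   # MtCO2, hypothetical
population = 50_000_000

print(net_trade_balance(exported, imported))            # +60 Mt: net importer
print(per_capita(production, population))               # 8.0 t/person (production basis)
print(per_capita(consumption_emissions(production, exported, imported),
                 population))                           # 9.2 t/person (consumption basis)
```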
global warming potential
Global warming potential or greenhouse warming potential (GWP) is a measure of how much infrared thermal radiation a greenhouse gas added to the atmosphere would absorb over a given time frame, as a multiple of the radiation that would be absorbed by the same mass of added carbon dioxide (CO2). GWP is 1 for CO2. For other gases it depends on how strongly the gas absorbs infrared thermal radiation, how quickly the gas leaves the atmosphere, and the time frame being considered. The carbon dioxide equivalent (CO2e or CO2eq or CO2-e) is calculated from GWP. For any gas, it is the mass of CO2 that would warm the earth as much as the mass of that gas. Thus it provides a common scale for measuring the climate effects of different gases. It is calculated as GWP times the mass of the other gas. Methane has a GWP (over 20 years) of 81.2, meaning that, for example, a leak of a tonne of methane is equivalent to emitting 81.2 tonnes of carbon dioxide. Similarly, a tonne of nitrous oxide, from manure or paddy fields for example, is equivalent to 273 tonnes of carbon dioxide. Values Carbon dioxide is the reference. It has a GWP of 1 regardless of the time period used. CO2 emissions cause increases in atmospheric concentrations of CO2 that will last thousands of years. Estimates of GWP values over 20, 100 and 500 years are periodically compiled and revised in reports from the Intergovernmental Panel on Climate Change. The most recent is the IPCC Sixth Assessment Report (Working Group I), published in 2021. Earlier reports were the Second Assessment Report (1995), Third Assessment Report (2001), Fourth Assessment Report (2007) and Fifth Assessment Report (2013). Though recent reports reflect greater scientific accuracy, countries and companies continue to use SAR and AR4 values for reasons of comparison in their emission reports. AR5 omitted 500-year values but introduced GWP estimates that include the climate-carbon feedback (f), which carry a large amount of uncertainty. The IPCC lists many other substances not shown here. Some have a high GWP but only a low concentration in the atmosphere. The total impact of all fluorinated gases is estimated at 3% of all greenhouse gas emissions. The values given in the table assume the same mass of compound is analyzed; different ratios will result from the conversion of one substance to another. For instance, burning methane to carbon dioxide would reduce the global warming impact, but by a smaller factor than 25:1 because the mass of methane burned is less than the mass of carbon dioxide released (ratio 1:2.74). For a starting amount of 1 tonne of methane, which has a GWP of 25, after combustion there would be 2.74 tonnes of CO2, each tonne of which has a GWP of 1. This is a net reduction of 22.26 tonnes of CO2e, reducing the global warming effect by a ratio of 25:2.74 (approximately 9 times). Use in Kyoto Protocol and UNFCCC Under the Kyoto Protocol, in 1997 the Conference of the Parties standardized international reporting by deciding (decision 2/CP.3) that the GWP values calculated for the IPCC Second Assessment Report were to be used for converting the various greenhouse gas emissions into comparable CO2 equivalents. After some intermediate updates, in 2013 this standard was updated by the Warsaw meeting of the UN Framework Convention on Climate Change (UNFCCC, decision 24/CP.19) to require the use of a new set of 100-year GWP values.
They published these values in Annex III, taking them from the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, which had been published in 2007. Those 2007 estimates were still used for international comparisons through 2020, although the latest research on warming effects has found other values, as shown in the table above. Importance of time horizon A substance's GWP depends on the number of years (denoted by a subscript) over which the potential is calculated. A gas which is quickly removed from the atmosphere may initially have a large effect, but for longer time periods, as it has been removed, it becomes less important. Thus methane has a potential of 25 over 100 years (GWP100 = 25) but 86 over 20 years (GWP20 = 86); conversely sulfur hexafluoride has a GWP of 22,800 over 100 years but 16,300 over 20 years (IPCC Third Assessment Report). The GWP value depends on how the gas concentration decays over time in the atmosphere. This is often not precisely known, and hence the values should not be considered exact. For this reason, when quoting a GWP it is important to give a reference to the calculation. The GWP for a mixture of gases can be obtained from the mass-fraction-weighted average of the GWPs of the individual gases. Commonly, a time horizon of 100 years is used by regulators. Water vapour Water vapour does contribute to anthropogenic global warming, but as the GWP is defined, it is negligible for H2O: an estimate gives a 100-year GWP between -0.001 and 0.0005. H2O can function as a greenhouse gas because it has a broad infrared absorption spectrum, with more and wider absorption bands than CO2. Its concentration in the atmosphere is limited by air temperature, so that radiative forcing by water vapour increases with global warming (a positive feedback). But the GWP definition excludes indirect effects. The GWP definition is also based on emissions, and anthropogenic emissions of water vapour (from cooling towers or irrigation) are removed via precipitation within weeks, so its GWP is negligible. Criticism and other metrics The Global Temperature change Potential (GTP) is another way to compare gases. While GWP estimates the infrared thermal radiation absorbed, GTP estimates the resulting rise in the average surface temperature of the world over the next 20, 50 or 100 years caused by a greenhouse gas, relative to the temperature rise which the same mass of CO2 would cause. Calculation of GTP requires modeling how the world, especially the oceans, will absorb heat. GTP is published in the same IPCC tables as GWP. GWP* has been proposed to take better account of short-lived climate pollutants (SLCP) such as methane, relating a change in the rate of emissions of SLCPs to a fixed quantity of CO2. However, GWP* has itself been criticised both for its suitability as a metric and for inherent design features which can perpetuate injustices and inequity. Calculating the global warming potential The GWP depends on the following factors: the absorption of infrared radiation by a given gas; the time horizon of interest (integration period); and the atmospheric lifetime of the gas. A high GWP correlates with a large infrared absorption and a long atmospheric lifetime. The dependence of GWP on the wavelength of absorption is more complicated. Even if a gas absorbs radiation efficiently at a certain wavelength, this may not affect its GWP much if the atmosphere already absorbs most radiation at that wavelength.
A gas has the most effect if it absorbs in a "window" of wavelengths where the atmosphere is fairly transparent. The dependence of GWP as a function of wavelength has been found empirically and published as a graph. Because the GWP of a greenhouse gas depends directly on its infrared spectrum, the use of infrared spectroscopy to study greenhouse gases is centrally important in the effort to understand the impact of human activities on global climate change. Just as radiative forcing provides a simplified means of comparing the various factors that are believed to influence the climate system to one another, global warming potentials (GWPs) are one type of simplified index based upon radiative properties that can be used to estimate the potential future impacts of emissions of different gases upon the climate system in a relative sense. GWP is based on a number of factors, including the radiative efficiency (infrared-absorbing ability) of each gas relative to that of carbon dioxide, as well as the decay rate of each gas (the amount removed from the atmosphere over a given number of years) relative to that of carbon dioxide. The radiative forcing capacity (RF) is the amount of energy per unit area, per unit time, absorbed by the greenhouse gas, that would otherwise be lost to space. Consistent with the definitions that follow, it can be expressed as a sum over wavenumber intervals:

$$\mathrm{RF} = \sum_i \mathrm{Abs}_i \, F_i$$

where the subscript $i$ represents a wavenumber interval of 10 inverse centimeters, $\mathrm{Abs}_i$ represents the integrated infrared absorbance of the sample in that interval, and $F_i$ represents the RF for that interval. The Intergovernmental Panel on Climate Change (IPCC) provides the generally accepted values for GWP, which changed slightly between 1996 and 2001, except for methane, whose GWP almost doubled. An exact definition of how GWP is calculated is to be found in the IPCC's 2001 Third Assessment Report. The GWP is defined as the ratio of the time-integrated radiative forcing from the instantaneous release of 1 kg of a trace substance relative to that of 1 kg of a reference gas:

$$\mathrm{GWP}_x(\mathrm{TH}) = \frac{\int_0^{\mathrm{TH}} a_x\,[x](t)\,dt}{\int_0^{\mathrm{TH}} a_r\,[r](t)\,dt}$$

where $\mathrm{TH}$ is the time horizon over which the calculation is considered; $a_x$ is the radiative efficiency due to a unit increase in atmospheric abundance of the substance (i.e., W m−2 kg−1) and $[x](t)$ is the time-dependent decay in abundance of the substance following an instantaneous release at time $t = 0$. The denominator contains the corresponding quantities for the reference gas $r$ (i.e. CO2). The radiative efficiencies $a_x$ and $a_r$ are not necessarily constant over time. While the absorption of infrared radiation by many greenhouse gases varies linearly with their abundance, a few important ones display non-linear behaviour for current and likely future abundances (e.g., CO2, CH4, and N2O). For those gases, the relative radiative forcing will depend upon abundance and hence upon the future scenario adopted. Since all GWP calculations are a comparison to CO2, which is non-linear, all GWP values are affected. Assuming linearity, as is done above, will lead to lower GWPs for other gases than a more detailed approach would give. To clarify: while increasing CO2 has less and less effect on radiative absorption as ppm concentrations rise, more powerful greenhouse gases like methane and nitrous oxide absorb at frequencies different from those of CO2 that are not saturated ("filled up") to the same extent, so rising concentrations of these gases are far more significant. Carbon dioxide equivalent Carbon dioxide equivalent (CO2e or CO2eq or CO2-e) of a quantity of gas is calculated from its GWP.
For any gas, it is the mass of CO2 which would warm the earth as much as the mass of that gas. Thus it provides a common scale for measuring the climate effects of different gases. It is calculated as GWP multiplied by the mass of the other gas. For example, if a gas has a GWP of 100, two tonnes of the gas have a CO2e of 200 tonnes, and 9 tonnes of the gas have a CO2e of 900 tonnes. On a global scale, the warming effects of one or more greenhouse gases in the atmosphere can also be expressed as an equivalent atmospheric concentration of CO2. CO2e can then be the atmospheric concentration of CO2 which would warm the earth as much as a particular concentration of some other gas or of all gases and aerosols in the atmosphere. For example, a CO2e of 500 parts per million would reflect a mix of atmospheric gases which warm the earth as much as 500 parts per million of CO2 would warm it. Calculation of the equivalent atmospheric concentration of CO2 of an atmospheric greenhouse gas or aerosol is more complex and involves the atmospheric concentrations of those gases, their GWPs, and the ratios of their molar masses to the molar mass of CO2. CO2e calculations depend on the time-scale chosen, typically 100 years or 20 years, since gases decay in the atmosphere or are absorbed naturally at different rates. The following units are commonly used: By the UN climate change panel (IPCC): billion metric tonnes = n×10⁹ tonnes of CO2 equivalent (GtCO2eq). In industry: million metric tonnes of carbon dioxide equivalents (MMTCDE) and MMT CO2eq. For vehicles: grams of carbon dioxide equivalent per mile (gCO2e/mile) or per kilometer (gCO2e/km). For example, the table above shows GWP for methane over 20 years at 86 and nitrous oxide at 289, so emissions of 1 million tonnes of methane or nitrous oxide are equivalent to emissions of 86 or 289 million tonnes of carbon dioxide, respectively. See also Carbon accounting Carbon footprint Emission intensity List of refrigerants Radiative forcing Total equivalent warming impact Vehicle emission standard References Notes Sources IPCC reports Schimel, D.; Alves, D.; Enting, I.; Heimann, M.; et al. (1995). "Chapter 2: Radiative Forcing of Climate Change". Climate Change 1995: The Science of Climate Change. Contribution of Working Group I to the Second Assessment Report of the Intergovernmental Panel on Climate Change. pp. 65–132. Ramaswamy, V.; Boucher, O.; Haigh, J.; Hauglustaine, D.; et al. (2001). "Chapter 6: Radiative Forcing of Climate Change". Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change. pp. 349–416. Forster, P.; Ramaswamy, V.; Artaxo, P.; Berntsen, T.; et al. (2007). "Chapter 2: Changes in Atmospheric Constituents and Radiative Forcing" (PDF). Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. pp. 129–234. Myhre, G.; Shindell, D.; Bréon, F.-M.; Collins, W.; et al. (2013). "Chapter 8: Anthropogenic and Natural Radiative Forcing" (PDF). Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. pp. 659–740. IPCC (2021). Masson-Delmotte, V.; Zhai, P.; Pirani, A.; Connors, S. L.; et al. (eds.). Climate Change 2021: The Physical Science Basis (PDF). Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change.
Cambridge University Press (In Press). Forster, Piers; Storelvmo, Trude (2021). "Chapter 7: The Earth's Energy Budget, Climate Feedbacks, and Climate Sensitivity" (PDF). IPCC AR6 WG1 2021. Other sources Alvarez (2018). "Assessment of methane emissions from the U.S. oil and gas supply chain". Science. 361 (6398): 186–188. Bibcode:2018Sci...361..186A. doi:10.1126/science.aar7204. PMC 6223263. PMID 29930092. Etminan, M.; Myhre, G.; Highwood, E. J.; Shine, K. P. (2016-12-28). "Radiative forcing of carbon dioxide, methane, and nitrous oxide: A significant revision of the methane radiative forcing: Greenhouse Gas Radiative Forcing". Geophysical Research Letters. 43 (24): 12, 614–12, 623. Bibcode:2016GeoRL..4312614E. doi:10.1002/2016GL071930. Warwick, Nicola; Griffiths, Paul; Keeble, James; Archibald, Alexander; Pyle, John (2022-04-08). Atmospheric implications of increased hydrogen use (Report). UK Department for Business, Energy & Industrial Strategy (BEIS). Morton, Adam (2020-08-26). "Methane released in gas production means Australia's emissions may be 10% higher than reported". The Guardian. ISSN 0261-3077. Retrieved 2020-08-26. Olivier, J.G.J.; Peters, J.A.H.W. (2020). Trends in global CO2 and total greenhouse gas emissions (2020) (PDF) (Report). The Hague: PBL Netherlands Environmental Assessment Agency. Archived (PDF) from the original on 2021-03-17. External links List of Global Warming Potentials and Atmospheric Lifetimes from the U.S. EPA GWP and the different meanings of CO2e explained Bibliography Gohar, L. K.; Shine, K. P. (November 2007). "Equivalent CO2 and its use in understanding the climate effects of increased greenhouse gas concentrations". Weather. Royal Meteorological Society. 62 (11): 307–311. Bibcode:2007Wthr...62..307G. doi:10.1002/wea.103. ISSN 1477-8696. S2CID 121065920.
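As a worked check on the CO2e arithmetic described in this article, here is a short sketch; the GWP figures are the ones quoted in the text above (81.2 and 273 for AR6 20-year values, 25 for the older GWP100 used in the combustion example), not authoritative values:

```python
# Numerical sketch of the CO2e arithmetic described in the article above.

def co2e(mass_tonnes, gwp):
    """Carbon dioxide equivalent = GWP x mass of the gas (tonnes CO2e)."""
    return mass_tonnes * gwp

print(co2e(1.0, 81.2))    # 1 t methane leak ~= 81.2 t CO2e (AR6, 20-year GWP)
print(co2e(1.0, 273.0))   # 1 t nitrous oxide ~= 273 t CO2e

# Burning methane: 1 t CH4 (GWP100 = 25 in the text's worked example) yields
# 2.74 t CO2 (GWP 1), so combustion cuts the warming impact roughly 9-fold.
before = co2e(1.0, 25.0)   # 25.0 t CO2e
after = co2e(2.74, 1.0)    # 2.74 t CO2e
print(before / after)      # ~9.1, matching the 25:2.74 ratio in the text
```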
greenhouse gas emissions by the united kingdom
In 2021, net greenhouse gas (GHG) emissions in the United Kingdom (UK) were 427 million tonnes (Mt) carbon dioxide equivalent (CO2e), 80% of which was carbon dioxide (CO2) itself. Emissions increased by 5% in 2021 with the easing of COVID-19 restrictions, primarily due to extra road transport. The UK has over time emitted about 3% of the world's total human-caused CO2, and currently emits under 1%, although its population is less than 1% of the world total. Emissions decreased in the 2010s due to the closure of almost all coal-fired power stations. In 2020, emissions per person were somewhat over 6 tonnes when measured by the international standard production-based greenhouse gas inventory, near the global average. Consumption-based emissions, which include GHG due to imports and aviation, are much larger, at about 10 tonnes per person per year. The UK has committed to carbon neutrality by 2050, and the Energy and Climate Intelligence Unit has said it would be affordable. The target for 2030 is a 68% reduction compared with 1990 levels. The UK has maintained economic growth while taking climate action: between 1990 and 2019, the UK's greenhouse gas emissions fell by 44% while the economy grew by around 75%. One of the methods of reducing emissions is the UK Emissions Trading Scheme. Meeting future carbon budgets will require reducing emissions by at least 3% a year. At the 2021 United Nations Climate Change Conference the Prime Minister said the government would not be "lagging on lagging", but in 2022 the opposition said Britain was badly behind on home insulation. The Committee on Climate Change, an independent body which advises the UK and devolved governments, has recommended hundreds of actions to the government, including better energy efficiency, for example in housing. Monitoring, verification and reporting Although carbon dioxide is the main GHG, methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), nitrogen trifluoride (NF3) and sulphur hexafluoride (SF6) are also included. Land use, land-use change, and forestry is the most uncertain sector. Cumulative emissions Cumulative CO2 emissions since 1750 are estimated to be around 80 billion tonnes, about 3% of the world total. As well as coal burnt during and since the Industrial Revolution, destruction of forests also contributed. Emissions by sector Transport Transport is the most emitting sector; the Department for Energy Security and Net Zero estimated that it was responsible for about 26% of GHG in 2021. This is mainly due to road vehicles, and particularly cars, burning petrol and diesel. Transport emissions have declined since 1990 despite the number of vehicles increasing, because of improvements in the fuel efficiency of both petrol and diesel cars. Transport was significantly impacted by COVID-19, as people were instructed to stay at home as much as possible. In 2020, territorial carbon dioxide emissions from the transport sector were 97.2 Mt, 19.6% (23.7 Mt) lower than in 2019, and 22.5% lower than in 1990. In 2020 transport accounted for 29.8% of all territorial carbon dioxide emissions, compared to 33.1% in 2019. The large majority of emissions from transport are from road transport. Jet Zero is the strategy to get to zero aviation emissions by 2050.
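The reduction rates quoted above ("at least 3% a year", a 68% cut on 1990 levels by 2030) follow from simple compound arithmetic. A hedged sketch, using the 427 MtCO2e 2021 figure given above and an approximate 1990 baseline of 800 MtCO2e (an assumption for illustration, not an official statistic):

```python
# Sketch of the compound-reduction arithmetic behind the annual-cut figures
# quoted above. The 1990 baseline is approximate and purely illustrative.

def annual_cut_required(current, target, years):
    """Constant yearly percentage cut taking `current` to `target` in `years`."""
    return 1.0 - (target / current) ** (1.0 / years)

baseline_1990 = 800.0          # MtCO2e, approximate (assumption)
emissions_2021 = 427.0         # MtCO2e, from the article
target_2030 = 0.32 * baseline_1990   # a 68% cut on the 1990 baseline

rate = annual_cut_required(emissions_2021, target_2030, years=9)
print(f"{rate:.1%} per year")  # ~5.5% per year, well above the 3% floor
```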
Energy supply Energy in the United Kingdom emitted about a fifth of GHG in 2021, mainly from burning gas to generate electricity. Gas There were 50 enterprises in the United Kingdom oil and gas extraction industry with an annual turnover of more than five million pounds as of 2021. Extracting North Sea oil and gas is estimated to directly emit 3.5% of UK GHG. Environmental activists say there should be no new gas-fired power stations in the UK. Biomass As of 2021, the net GHG and climate change effects of biomass fuel are still being researched and debated: one large user is Drax Power Station, which aims to be carbon negative, but green groups dispute its carbon accounting and say that forests would not regrow quickly enough. Coal The UK will phase out coal in 2024. The Eggborough plant was closed in 2018. The UK had two weeks in May 2019 with all its coal plants switched off, for the first time since the Industrial Revolution began. Business Territorial carbon dioxide emissions from the business sector were estimated to be 59.4 Mt in 2020 and accounted for around 18.2% of all carbon dioxide emissions. There has been a 46.8% decrease in business sector emissions since 1990. Most of this decrease came between 2001 and 2009, with a significant drop in 2009 likely to have been driven by economic factors. The Humber industrial region is the UK's most emitting region, at 12 million tonnes of CO2 per year. A 2020 study suggests that half of the UK's 'true carbon footprint' is created abroad, a large percentage of which can be attributed to imports entering the UK from other countries. International travel can also be included in this sector. Residential In 2021, the residential sector emitted almost 70 Mt CO2e, accounting for 16% of all GHG emissions. The main source of emissions in this sector is the use of natural gas for heating (and for cooking); the amount therefore varies by year depending on the weather. Emissions from this sector do not include emissions from the generation of electricity consumed, as these are included in the energy supply sector. Governments have been criticised for stop-start support for home insulation. At COP26 Boris Johnson said the UK would not be "lagging on lagging", but the government was later criticised for doing exactly that. Land use Agriculture Agriculture is responsible for a tenth of emissions: 68% of total nitrous oxide emissions, 47% of total methane emissions and 1.7% of total carbon dioxide emissions. Peat UK peatlands such as the Great North Bog cover around 23,000 km2, or 9.5% of the UK land area, and store at least 3.2 billion tonnes of carbon. A loss of only 5% of UK peatland carbon would equate to the total annual UK anthropogenic greenhouse gas emissions. Healthy peat bogs have a net long-term 'cooling' effect on the climate. Peatlands rely on water: when drained, they waste away through oxidation, adding carbon dioxide to the atmosphere. Damaged and degraded peatlands place a substantial financial burden on society because of increased greenhouse gas emissions, poorer water quality and loss of other ecosystem services. The Wildlife Trusts say that selling peat should be banned. Heath Woody plant encroachment in heathlands can lead to the release of soil organic carbon, which is not offset by the growth in woody biomass above ground. The removal of conifers from afforested heathland is a recommended measure to reverse this trend.
Seas MPs say that bottom trawling and dredging are harmful and should be banned in marine protected areas. Mitigation UK carbon neutral plan Sectoral figures exclude carbon emissions from international aviation and international shipping, which together rose by 74.2%, from 22.65 to 39.45 million tonnes of carbon dioxide, between 1990 and 2004. Reductions in methane emissions are largely due to a decline in the country's coal industry and to improved landfilling technologies. The Climate Change Act 2008 set the country's emission reduction targets. Before 2019 the UK was legally bound by the Climate Change Act to reduce emissions 80% by 2050, but a new law mandating a 100% cut was under discussion in 2019. According to the Committee on Climate Change, the UK can cut its carbon emissions down to near zero, and so become carbon neutral, at no extra cost if done gradually from 2019 to 2050. The law was adopted by Parliament in June 2019. The "legally binding" target is a reduction of at least 100% by 2050 (against the 1990 baseline). The Act also mandates interim five-year carbon budgets. Criticism of targets Production targets have been criticised for ignoring the emissions embodied in imports, thereby attributing them to other countries, such as China. Including these gives a total for consumption-based GHG emissions, also called the UK carbon footprint, of about 650 Mt a year. Tax policy Businesses and employees are given tax breaks for electric cars, and a much larger proportion of business vehicle purchases are electric than those of consumers. It is hoped that an increased supply of used fleet electric cars will eventually result in affordable second-hand electric cars for private buyers, as purchase price is still a barrier for many consumers. It has been suggested that value added tax (VAT) on natural gas used for heating should be raised from 5% to the usual 20% and the proceeds used to help poor people. Emissions Trading Transport The Government is developing a plan to accelerate the decarbonisation of transport. The Transport Decarbonisation Plan (TDP) will set out in detail what government, business and society will need to do to deliver the significant emissions reduction needed across all modes of transport, putting the UK on a pathway to achieving carbon budgets and net zero emissions across every single mode of transport by 2050. Sales of non-electric cars will end by 2030, and of hybrids by 2035. Residential On the domestic level, the UK aims to reduce direct CO2 emissions from homes by 24% by 2030. There are several ways to achieve this goal, such as home insulation, the installation of heat pumps, and the use of renewable energy such as solar panels. As of 2022, the installation cost of a heat pump is higher than that of a gas boiler, but with the government grant, and assuming electricity and gas costs remain similar, their lifetime costs would be similar. However, the share of heat pumps in the UK is far below the European average. More waste heat could be saved and used, for example in London. Agriculture Following departure from the EU Common Agricultural Policy, the Agriculture Act was passed to govern agriculture in the United Kingdom. The most common actions to reduce GHG emissions were recycling waste materials, improving nitrogen fertiliser application and improving energy efficiency. These are actions that are relevant to most farm enterprises. Those actions more suited to livestock enterprises had a lower level of uptake.
The 2021 Farm Practices Survey (FPS) indicated that 67% of farmers thought it important to consider GHGs when making farm business decisions, whilst 27% considered it not important. See also Climate change in the United Kingdom Energy Company Obligation Plug-in electric vehicles in the United Kingdom Notes References External links UK Climate Change Act and actual Green House Gas emissions
european union emissions trading system
The European Union Emissions Trading System (EU ETS) is a carbon emission trading scheme (or cap and trade scheme) which began in 2005 and is intended to lower greenhouse gas emissions in the European Union. Cap and trade schemes limit emissions of specified pollutants over an area and allow companies to trade emissions rights within that area. The EU ETS covers around 45% of the EU's greenhouse gas emissions. The scheme has been divided into four "trading periods". The first ETS trading period lasted three years, from January 2005 to December 2007. The second trading period ran from January 2008 until December 2012, coinciding with the first commitment period of the Kyoto Protocol. The third trading period lasted from January 2013 to December 2020. Compared to 2005, when the EU ETS was first implemented, the proposed cap for 2020 represented a 21% reduction of greenhouse gases. This target was reached six years early, as emissions in the ETS fell to 1.812 billion tonnes in 2014. The fourth phase started in January 2021 and will continue until December 2030. The emission reductions to be achieved over this period were unclear as of November 2021, as the European Green Deal necessitates tightening of the then-current EU ETS reduction target for 2030 of −43% with respect to 2005. The EU Commission proposes in its "Fit for 55" package to increase the EU ETS reduction target for 2030 to −61% compared to 2005. EU countries view the emissions trading scheme as necessary to meeting climate goals: a strong carbon market guides investors and industry in their transition from fossil fuels. A 2020 study found that the EU ETS successfully reduced CO2 emissions even though the prices for carbon were low. A 2023 study on the effects of the EU ETS identified a reduction in carbon emissions of around 10% between 2005 and 2012, with no impact on profits or employment for regulated firms. The price of EU allowances exceeded €100/tCO2 ($118) in February 2023. Setup The EU Emission Trading System follows the cap and trade model, where one allowance permits the holder to emit one tonne of CO2 (tCO2). Under this scheme, a maximum (cap) is set on the total amount of greenhouse gases that can be emitted by all participating installations. EU Allowances for emissions are then auctioned off or allocated for free, and can subsequently be traded. Installations must monitor and report their CO2 emissions, ensuring they hand in enough allowances to the authorities to cover their emissions. An installation wishing to emit more than its allowance holding must purchase allowances from others; conversely, if an installation emits less than its allowances cover, it can sell its leftover credits. This allows the system to find the most cost-effective ways of reducing emissions without significant government intervention. The scheme initially covered energy and heat generation industries; around 11,186 plants participated in the first stage. These plants accounted for only about 45% of all European emissions at the time. More than 90% of allowances were allocated free of charge in both of the first two periods, to build a strong base of abatement for the future phases. This free allocation saw the volume and value of allowances grow three-fold over 2006, with the price moving from €19/tCO2 in 2005 to its peak of €30/tCO2, which revealed a new problem.
The overallocation of allowances caused the price to drop to €1/tCO2 in the first few months of 2007, creating price instability that discouraged businesses from reinvesting in low-carbon technologies. The European Union Emission Trading Scheme (or EU-ETS) is the largest multi-national greenhouse gas emissions trading scheme in the world. After voluntary trials in the UK and Denmark, Phase I began operation in January 2005 with all 15 member states of the European Union participating. The program caps the amount of carbon dioxide that can be emitted from large installations with a net heat supply in excess of 20 MW, such as power plants and carbon-intensive factories, and covers almost half (46%) of the EU's carbon dioxide emissions. Phase I permits participants to trade among themselves and in validated credits from the developing world through Kyoto's Clean Development Mechanism. Credits are gained by investing in clean technologies and low-carbon solutions, and by certain types of emission-saving projects around the world, to cover a proportion of their emissions. History The EU-ETS was the first large greenhouse gas emissions trading scheme in the world. It was launched in 2005 to fight global warming and is a major pillar of EU energy policy. As of 2013, the EU ETS covers more than 11,000 factories, power stations, and other installations with a net heat supply in excess of 20 MW in 31 countries: all 27 EU member states plus Iceland, Norway, Liechtenstein and the United Kingdom. In 2008, the installations regulated by the EU ETS were collectively responsible for close to half of the EU's anthropogenic emissions of CO2 and 40% of its total greenhouse gas emissions. The EU had set targets for 2020 to cut greenhouse gas emissions by 20% compared with 1990, to reduce energy consumption by 20% compared to the 2007 baseline scenario, and to achieve a 20% share of gross final energy consumption from renewable energy sources; all of these were achieved. A 2020 study estimated that the EU ETS had reduced CO2 emissions by more than 1 billion tonnes between 2008 and 2016, or 3.8% of total EU-wide emissions. The EU ETS has seen a number of significant changes, with the first trading period described as a "learning by doing" phase. Phase III saw a turn to auctioning more permits rather than allocating them freely (in 2013, over 40% of allowances were auctioned); harmonisation of rules for the remaining allocations; and the inclusion of other greenhouse gases, such as nitrous oxide and perfluorocarbons. In 2012, the EU ETS was also extended to the airline industry, though this only applies within the EEA. The price of EU ETS carbon credits has been lower than intended, with a large surplus of allowances, in part because of the impact of the recent economic crisis on demand. In 2012, the Commission said it would delay the auctioning of some allowances. In 2015, the EU passed Decision (EU) 2015/1814 to establish a Market Stability Reserve that adjusts the annual supply of CO2 permits based on the permits in circulation in the previous year. In 2018, the Market Stability Reserve was amended by Directive (EU) 2018/410 so that a certain amount of permits inside the reserve would be cancelled from 2023 onwards. In January 2008, Norway, Iceland, and Liechtenstein joined the European Union Emissions Trading System (EU-ETS).
The Norwegian Ministry of the Environment has also released its draft National Allocation Plan, which provides a carbon cap-and-trade of 15 million tonnes of CO2, 8 million of which are set to be auctioned. According to the OECD Economic Survey of Norway 2010, the nation "has announced a target for 2008–12 10% below its commitment under the Kyoto Protocol and a 30% cut compared with 1990 by 2020." In 2012, EU-15 emissions were 15.1% below their base year level. Based on figures for 2012 from the European Environment Agency, EU-15 emissions averaged 11.8% below base-year levels during the 2008–2012 period. This means the EU-15 over-achieved its first Kyoto target by a wide margin. Mechanisms The first phase of the EU ETS was created to operate apart from international climate change treaties such as the pre-existing United Nations Framework Convention on Climate Change (UNFCCC, 1992) or the Kyoto Protocol that was subsequently (1997) established under it. When the Kyoto Protocol came into force on 16 February 2005, Phase I of the EU ETS had already become operational. The EU later agreed to incorporate Kyoto flexible mechanism certificates as compliance tools within the EU ETS. The "Linking Directive" allows operators to use a certain amount of Kyoto certificates from flexible mechanism projects to cover their emissions. The Kyoto flexible mechanisms are: Joint Implementation projects (JI), defined by Article 6 of the Kyoto Protocol, which produce Emission Reduction Units (ERUs), one ERU representing the successful emissions reduction equivalent to one tonne of carbon dioxide equivalent (tCO2e); the Clean Development Mechanism (CDM), defined by Article 12, which produces Certified Emission Reductions (CERs), one CER representing the successful emissions reduction equivalent to one tonne of carbon dioxide equivalent (tCO2e); and International Emissions Trading (IET), defined by Article 17. IET is relevant as the reductions achieved through CDM projects are a compliance tool for EU ETS operators. These Certified Emission Reductions (CERs) can be obtained by implementing emission reduction projects in developing countries, outside the EU, that have ratified (or acceded to) the Kyoto Protocol. The implementation of Clean Development projects is largely specified by the Marrakech Accords, a follow-on set of agreements by the Conference of the Parties to the Kyoto Protocol. The legislators of the EU ETS drew up the scheme independently but called on the experience gained during the running of the voluntary UK Emissions Trading Scheme in the previous years, and collaborated with other parties to ensure its units and mechanisms were compatible with the design agreed through the UNFCCC. Under the EU ETS, the governments of the EU Member States agree on national emission caps, which have to be approved by the EU Commission. Those countries then allocate allowances to their industrial operators, and track and validate the actual emissions in accordance with the relevant assigned amount. They require the allowances to be retired after the end of each year.
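In code terms, the annual compliance cycle just described reduces to a per-installation position check: surrender one allowance per tonne of verified CO2, buying or selling to square the position. A minimal, hypothetical sketch:

```python
# Sketch of the EU ETS annual compliance check; all data below is invented
# for illustration and does not describe any real installation.

def compliance_position(verified_emissions_t, allowances_held):
    """Positive = surplus allowances to sell or bank; negative = must buy."""
    return allowances_held - verified_emissions_t

installations = {
    # name: (verified tCO2, allowances allocated or purchased) - hypothetical
    "power_plant_A": (1_200_000, 1_000_000),
    "cement_works_B": (450_000, 500_000),
}

for name, (emitted, held) in installations.items():
    pos = compliance_position(emitted, held)
    if pos < 0:
        print(f"{name}: must buy {-pos:,} allowances before surrender")
    else:
        print(f"{name}: can sell or bank {pos:,} allowances")
```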
The operators within the ETS may reassign or trade their allowances by several means: privately, moving allowances between operators within a company and across national borders; over the counter, using a broker to privately match buyers and sellers; or by trading on the spot market of one of Europe's climate exchanges. Like any other financial instrument, trading consists of matching buyers and sellers between members of the exchange and then settling by depositing a valid allowance in exchange for the agreed financial consideration. Much like a stock market, companies and private individuals can trade through brokers who are listed on the exchange, and need not be regulated operators. When each change of ownership of an allowance is proposed, the national Emissions Trading Registry and the European Commission are informed in order for them to validate the transaction. During Phase II of the EU ETS, the UNFCCC also validates the allowance and any change that alters the distribution within each national allocation plan. Like the Kyoto trading scheme, the EU ETS allows a regulated operator to use carbon credits in the form of Emission Reduction Units (ERUs) to comply with its obligations. A Kyoto Certified Emission Reduction unit (CER), produced by a carbon project that has been certified by the UNFCCC Clean Development Mechanism Executive Board, or an Emission Reduction Unit (ERU) certified by the Joint Implementation project's host country or by the Joint Implementation Supervisory Committee, is accepted by the EU as equivalent. Thus one EU Allowance Unit of one tonne of CO2, or "EUA", was designed to be identical ("fungible") with the equivalent "assigned amount units" (AAU) of CO2 defined under Kyoto. Hence, because of the EU decision to accept Kyoto CERs as equivalent to EUAs, it is possible to trade EUAs and UNFCCC-validated CERs on a one-to-one basis within the same system. (However, the EU was not able to link trades from all its countries until 2008-9 because of technical problems connecting to the UN systems.) During Phase II of the EU ETS, the operators within each Member State must surrender their allowances for inspection by the EU before they can be "retired" by the UNFCCC. Allocation The total number of permits issued (either auctioned or allocated) determines the supply of allowances; the actual price is determined by the market. Too many allowances compared to demand will result in a low carbon price and reduced emission abatement efforts; too few allowances will result in a high carbon price. For each EU ETS phase, the total quantity to be allocated by each Member State is defined in its National Allocation Plan (equivalent to its UNFCCC-defined carbon account). The European Commission has oversight of the NAP process and decides whether the NAP fulfils the twelve criteria set out in Annex III of the Emission Trading Directive (EU Directive 2003/87/EC). The first and foremost criterion is that the proposed total quantity be in line with the Member State's Kyoto target. The Member State's plan can, and should, also take account of emission levels in other sectors not covered by the EU ETS, and address these within its own domestic policies. For instance, transport is responsible for 21% of EU greenhouse gas emissions, households and small businesses for 17%, and agriculture for 10%. During Phase I, most allowances in all countries were given out freely (known as grandfathering).
This approach has been criticized as giving rise to windfall profits, as being less efficient than auctioning, and as providing too little incentive for innovative new competition to provide clean, renewable energy. On the other hand, allocation rather than auctioning may be justified for a few sectors that face international competition, like the aluminium and steel industries. To address these problems, the European Commission proposed various changes in a January 2008 package, including the abolition of NAPs from 2013 and the auctioning of a far greater share (ca. 60% in 2013, growing afterward) of emission permits. From the start of Phase III (January 2013) there is a centralized allocation of permits rather than National Allocation Plans, with a greater share of permits auctioned. Competitiveness Allocation can act as a means of addressing concerns over loss of competitiveness, and possible "leakage" (carbon leakage) of emissions outside the EU. Leakage is the effect of emissions increasing in countries or sectors that have weaker regulation of emissions than the regulation in another country or sector. Such concerns affect the following sectors: cement, steel, aluminium, pulp and paper, basic inorganic chemicals and fertilisers/ammonia. Leakage from these sectors was thought to be under 1% of total EU emissions. Correcting for leakage by allocating permits acts as a temporary subsidy for affected industries, but does not fix the underlying problem. Border adjustments, where imports are taxed according to their carbon content, would be the economically efficient choice. One problem with border adjustments is that they might be used as a disguise for trade protectionism. Some adjustments may also not prevent emissions leakage. Banking and borrowing Within a given trading period, banking and borrowing are allowed. For example, a 2006 EUA can be used in 2007 (banking) or in 2005 (borrowing). Inter-period borrowing is not allowed. Member states had the discretion to decide whether banking EUAs from Phase I to Phase II was allowed. Members The EU ETS operates in 30 countries: the 27 EU member states plus Iceland, Liechtenstein and Norway. The United Kingdom left the EU on 31 January 2020 but remained subject to EU rules until 31 December 2020. The UK Emissions Trading Scheme (UK ETS) replaced the UK's participation in the EU ETS on 1 January 2021, but the UK government required organisations to continue to comply with their existing obligations under the 2020 scheme year, which ended on 30 April 2021. Linking The EU ETS has been linked to the Swiss Emissions Trading System since 1 January 2020. Linking systems creates a larger carbon market, which can reduce overall compliance costs, increase market liquidity and generate a more stable carbon market. Linking can also be politically symbolic, as it shows willingness to undertake a common effort to reduce GHG emissions. Some scholars have argued that linking may provide a starting point for developing a new, bottom-up international climate policy architecture whereby multiple unique systems successively link up. Phase I 2005–2007 In the first phase (2005–2007), the EU ETS included some 12,000 installations, representing approximately 40% of EU CO2 emissions, covering energy activities (combustion installations with a rated thermal input exceeding 20 MW, mineral oil refineries, coke ovens), production and processing of ferrous metals, the mineral industry (cement clinker, glass and ceramic bricks) and pulp, paper and board activities.
Launch and operation The ETS, in which all 15 Member States that were then members of the European Union participated, nominally commenced operation on 1 January 2005, although national registries were unable to settle transactions for the first few months. However, the prior existence of the UK Emissions Trading Scheme meant that market participants were already in place and ready. In its first year, 362 million tonnes of CO2 were traded on the market for a sum of €7.2 billion, along with a large number of futures and options. Prices The price of allowances increased more or less steadily to a peak level in April 2006 of about €30 per tonne CO2. In late April 2006, a number of EU countries (the Netherlands, the Czech Republic, Belgium, France, and Spain) announced that their verified (or actual) emissions were less than the number of allowances allocated to installations. The spot price for EU allowances dropped 54%, from €29.20 to €13.35, in the last week of April 2006. In May 2006, the European Commission confirmed that verified CO2 emissions were about 80 million tonnes, or 4%, lower than the number of allowances distributed to installations for 2005 emissions. In May 2006, prices fell to under €10/tonne. The lack of scarcity under the first phase of the system continued through 2006, resulting in a trading price of €1.2 per tonne in March 2007, declining to €0.10 in September 2007. In 2007, carbon prices for the trial phase dropped to near zero for most of the year. Meanwhile, prices for Phase II remained significantly higher throughout, reflecting the fact that allowances for the trial phase were set to expire on 31 December 2007. Verified emissions Verified emissions showed a net increase over the first phase of the scheme. For the countries for which data was available, emissions increased by 1.9% between 2005 and 2007 (at the time all 27 member states minus Romania, Bulgaria, and Malta). Consequently, observers accused national governments of abusing the system under industry pressure, and urged far stricter caps in the second phase (2008–2012). This led to a stricter regime in the second phase. Phase II 2008–12 Scope The second phase (2008–12) expanded the scope of the scheme significantly. In 2007, three non-EU members, Norway, Iceland, and Liechtenstein, joined the scheme. The EU's "Linking Directive" introduced the CDM and JI credits. Although this was a theoretical possibility in Phase I, the over-allocation of permits combined with the inability to bank them for use in the second phase meant it was not taken up. During Phases I and II, allowances for emissions were typically given free to firms, which resulted in windfall profits. Ellerman and Buchner (2008) suggested that during its first two years in operation, the EU-ETS turned an expected increase in emissions of 1–2% per year into a small absolute decline. Grubb et al. (2009) suggested that a reasonable estimate for the emissions cut achieved during its first two years of operation was 50–100 MtCO2 per year, or 2.5–5%. On 27 April 2012, the European Commission announced the full activation of the EU Emissions Trading System single registry. The full activation process included the migration of over 30,000 EU ETS accounts from national registries.
The European Commission further stated that the single registry to be activated in June would not contain all the required functionalities for Phase III of the EU ETS. Phase II saw some tightening, but the use of JI and CDM offsets was allowed, with the result that no reductions in the EU would be required to meet the Phase II cap. For Phase II, the cap was expected to result in an emissions reduction in 2010 of about 2.4% compared to expected emissions without the cap (business-as-usual emissions).

Aviation emissions
Aviation emissions were to be included from 2012. The inclusion of aviation was considered important by the EU, and was estimated to increase demand for allowances by about 10–12 million tonnes of CO2 per year in Phase II. According to DEFRA, an increased use of JI credits from projects in Russia and Ukraine would offset any increase in prices, so there would be no discernible impact on average annual CO2 prices. The airline industry and other countries, including China, India, Russia, and the United States, reacted adversely to the inclusion of the aviation sector. The United States and other countries argued that the EU did not have jurisdiction to regulate flights when they were not in European skies; China and the United States threatened to ban their national carriers from complying with the scheme. On 27 November 2012, the United States enacted the European Union Emissions Trading Scheme Prohibition Act of 2011, which prohibits U.S. carriers from participating in the European Union Emission Trading Scheme. China threatened to withhold $60 billion in outstanding orders from Airbus, which in turn led to France pressuring the EU to freeze the scheme. The EU insisted that the regulation should be applied equally to all carriers and that it did not contravene international regulations. In the absence of a global agreement on airline emissions, the EU argued that it was forced to go ahead with its own scheme. But only flights within the EEA are covered; international flights are not.

Other
Ultimately, the Commission intended that the third trading period should cover all greenhouse gases and all sectors, including aviation, maritime transport, and forestry. For the transport sector, the large number of individual users adds complexities, but coverage might be implemented either as a cap-and-trade system for fuel suppliers or a baseline-and-credit system for car manufacturers. The National Allocation Plans for Phase II, the first of which were announced on 29 November 2006, provided for an average reduction of nearly 7% below the 2005 emission levels. However, the use of offsets such as Emission Reduction Units from JI and Certified Emission Reductions from CDM projects was allowed, with the result that the EU would be able to meet the Phase II cap by importing units instead of reducing emissions (CCC, 2008, pp. 145, 149). According to verified EU data from 2008, the ETS resulted in an emissions reduction of 3%, or 50 million tons; at least 80 million tons of "carbon offsets" were bought for compliance with the scheme. In late 2006, the European Commission started infringement proceedings against Austria, the Czech Republic, Denmark, Hungary, Italy and Spain for failure to submit their proposed National Allocation Plans on time. In July 2020, the Environment Committee of the European Parliament voted to include CO2 emissions from the maritime sector in the EU Emissions Trading System starting in January 2024, covering ships over 5,000 GT.
State allocation plans
The annual Member State CO2 allowances, in million tonnes, were set out in a table of national allocations. [Table not reproduced here.]

Carbon price
The carbon price within Phase II increased to over €20/tCO2 in the first half of 2008 (CCC, 2008, p. 149). The average price was €22/tCO2 in the second half of 2008, and €13/tCO2 in the first half of 2009. CCC (2009, p. 67) gave two reasons for this fall in prices. First, output in energy-intensive sectors was reduced as a result of the recession, meaning that less abatement would be required to meet the cap, lowering the carbon price. Second, the market perception of future fossil fuel prices may have been revised downwards. Projections made in 2009 indicated that, like Phase I, Phase II would see a surplus in allowances, and that 2009 carbon prices were being sustained by the need to "bank" allowances to surrender them in the tougher third phase. In December 2009, carbon prices dropped to a six-month low after the Copenhagen climate summit outcome disappointed traders; prices for EU allowances for December 2010 delivery dropped 8.7% to 12.40 euros a tonne. In March 2012, according to The Economist, the EUA permit price under the EU ETS had "tanked" and was too low to provide incentives for firms to reduce emissions. The permit price had been persistently under €10 per tonne, compared to nearly €30 per tonne in 2008; the market had been oversupplied with permits. In June 2012, EU allowances for delivery in December 2012 traded at 6.76 euros each on the Intercontinental Exchange Futures Europe exchange, a 61 percent decline compared with a year previously. In July 2012, Thomson Reuters Point Carbon stated that, without intervention to reduce the supply of allowances, it expected the price of allowances to fall to four euros. The 2012 closing price for an EU allowance with a December 2013 contract ended the year at 6.67 euros a tonne. In late January 2013, the EU allowance price fell to a new record low of 2.81 euros after the energy and industry committee of the European Parliament opposed a proposal to withhold 900 million future-dated allowances from the market.

Phase III 2013–2020
For Phase III (2013–2020), the European Commission implemented a number of changes, including (CCC, 2008, p. 149): the setting of an overall EU cap, with allowances then allocated to EU members; tighter limits on the use of offsets; limits on the banking of allowances between Phases II and III; a move from free allocation towards auctioning; and the inclusion of more sectors and gases. Also, millions of allowances were set aside in the New Entrants Reserve (NER) to fund the deployment of innovative renewable energy technologies and carbon capture and storage through the NER 300 programme, one of the world's largest funding programmes for innovative low-carbon energy demonstration projects. The programme was conceived as a catalyst for the demonstration of environmentally safe carbon capture and storage (CCS) and innovative renewable energy (RES) technologies on a commercial scale within the European Union. Ahead of its accession to the EU, Croatia joined the ETS at the start of Phase III on 1 January 2013, taking the number of countries in the EU ETS to 31. On 4 January 2013, European Union allowances for 2013 traded on London's ICE Futures Europe exchange for between 6.22 euros and 6.40 euros. The number of excess allowances carried over ("banked") from Phase II to Phase III was 1.7 billion.

Phase IV 2021–2030
Phase IV commenced on 1 January 2021 and will finish on 31 December 2030.
The European Commission plans a full review of the Directive by 2026. Since 2018, prices have continuously increased, reaching €57/tCO2 (US$67) in July 2021. This results in additional costs of about €0.04/kWh for coal and €0.02/kWh for gas combustion for electricity.

Reform of the EU-ETS and introduction of the Market Stability Reserve (MSR)
On 22 January 2014, the European Commission proposed two structural reform amendments to the ETS directive (2003/87/EC) of the 2008 Climate Package, to be agreed in the Council Conclusions of 20–21 March 2014 by the heads of EU member states at the meeting of the European Council: first, an increase in the linear reduction factor, at which the overall emissions cap is reduced, from 1.74% (2013–2020) to 2.2% each year from 2021 to 2030, thus reducing EU CO2 emissions in the ETS sector by 43% compared to 2005; and second, the creation of a 12% "automatic set-aside" reserve mechanism of verified annual emissions (at least a 100 million CO2 permit reserve) in the fourth ETS period from 2021 to 2030, thus creating a quasi carbon tax or "carbon price floor" with a price range set each year by the European Commission's Directorate General for Climate Change.

Connie Hedegaard, the EU Commissioner for Climate Change, hoped "to link up the ETS with compatible systems around the world to form the backbone of a global carbon market", with Australia cited as an example. However, the COP 19 climate conference in 2013 again ended with no binding new international agreement, and after the election of the Liberal–National government, Australia dismantled its ETS.

Before the European Council summit on 20 March 2014, the European Commission decided to propose a change in the functioning of the carbon market (CO2 permits). The submitted legislation on the Market Stability Reserve (MSR) system would change the amount of annually auctioned CO2 permits based on the amount of CO2 permits in circulation. On 24 October 2014, at the meeting of the European Council, the heads of government of EU member states provided legal certainty for the proposed Market Stability Reserve by endorsing the project in the text of the Council Conclusions. The MSR would address imbalances in supply and demand in the European carbon market by adjusting the volumes offered for auction; the reserve would operate on predefined rules, with no discretion for the Commission or member states. The European Parliament and the Council informally agreed on an adapted version of this proposal, which set the starting date of the MSR to 2019 (so already in Phase III), put the 900 million backloaded allowances into the reserve, and reduced the reaction time of the MSR to one year. The adopted proposal was passed as Decision (EU) 2015/1814 by the European Parliament and the Council of Ministers in 2015.

Reform of the Market Stability Reserve (MSR)
In the years 2014–2017, the back-loading of auction volumes and the legislation introducing the MSR had neither substantially decreased the surplus of allowances nor substantially increased allowance prices in the EU ETS, with EUA prices remaining below €10/tCO2.
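The MSR's operating logic can be sketched as a simple annual rule. The following is a minimal illustration based on the thresholds of Decision (EU) 2015/1814 (intake when the surplus exceeds 833 million allowances, release of 100 million when it falls below 400 million) and the intake-rate doubling described in the next paragraph; it deliberately ignores other provisions of the real mechanism, such as the invalidation rule:

```python
# Minimal sketch of the Market Stability Reserve's annual adjustment,
# simplified from Decision (EU) 2015/1814; the real mechanism has
# further provisions (e.g. invalidation of reserve allowances from 2023).

UPPER_THRESHOLD = 833_000_000   # surplus level triggering intake
LOWER_THRESHOLD = 400_000_000   # surplus level triggering release
RELEASE_VOLUME = 100_000_000    # allowances released below the lower threshold

def msr_adjustment(tnac: int, year: int) -> int:
    """Change in auction volume for one year, given the total number of
    allowances in circulation (TNAC). Negative = withheld into reserve."""
    # The intake rate was doubled to 24% for 2019-2023 by Directive (EU) 2018/410.
    intake_rate = 0.24 if 2019 <= year <= 2023 else 0.12
    if tnac > UPPER_THRESHOLD:
        return -round(intake_rate * tnac)   # withhold from auctions
    if tnac < LOWER_THRESHOLD:
        return RELEASE_VOLUME               # release from the reserve
    return 0

# Example: with the ~1.7 billion surplus banked from Phase II,
# roughly 0.24 * 1.7bn = 408 million allowances are withheld in a year.
print(msr_adjustment(1_700_000_000, 2019))  # -> -408000000
```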
In 2018, the MSR was reformed again with Directive (EU) 2018/410, primarily to reduce the surplus of emissions allowances and create additional scarcity. For the period from 2019 to 2023, the share of allowances put into the MSR was increased from 12% to 24%. From 2023 onwards, all allowances in the MSR above the total number of allowances auctioned during the previous year would become invalid. In addition, member states that take additional policy measures leading to reduced demand for EUAs were permitted to unilaterally invalidate allowances. This reform led to a strong increase in EUA prices in 2018, with prices staying mostly in a range of €18–30/tCO2 from August 2018 to March 2020.

"Fit for 55" package
The change in the overall EU emissions target to a 55% reduction versus 1990 under the European Green Deal necessitated a tightening of the EU ETS's 2030 reduction target, which had stood at −43% with respect to 2005. The EU Commission proposed in its "Fit for 55" package to increase the EU ETS reduction target for 2030 to −61% compared to 2005. Such a tighter EU ETS target could increase the scarcity of EUAs and thus push EUA prices higher, with modelling studies estimating carbon prices in the range of €90–130/tCO2 for 2030. The EU Commission also proposed to include emissions from maritime transport in the EU ETS.

Russian invasion of Ukraine 2022
The 24 February 2022 invasion sent carbon prices plunging from €97 in early February down to below €70.

Costs
Emissions in the EU have been reduced at costs that are significantly lower than projected, though transaction costs are related to economies of scale and can be significant for smaller installations. Overall, the estimated cost was a fraction of 1% of GDP. It has been suggested that if permits were auctioned and the revenues used effectively – e.g., to reduce distortionary taxes and fund low-carbon technologies – costs could be eliminated or even turned into a positive economic impact.

Overall emission reductions
According to the European Commission, greenhouse gas emissions from big emitters covered by the EU ETS decreased by an average of more than 17,000 tonnes per installation between 2005 and 2010, a decrease of more than 8%. A 2020 study found that the EU ETS successfully reduced CO2 emissions even though carbon prices were set at low levels. A 2023 study on the effects of the EU ETS identified a reduction in carbon emissions on the order of 10% between 2005 and 2012. The study compared regulated and unregulated companies, concluding that the EU ETS had no significant impact on profits and employment and led to an increase in revenues and fixed assets for regulated companies.

Inclusion of sinks
Currently, the EU does not allow CO2 credits under the ETS to be obtained from sinks (e.g. reducing CO2 by planting trees), although some governments and industry representatives lobby for their inclusion. Inclusion is currently opposed by NGOs as well as by the EU Commission itself, which argue that sinks are surrounded by too many scientific uncertainties over their permanence, and that their long-term contribution to addressing climate change is inferior to reducing emissions from industrial sources.

ETS related crime

Cybercrime
On 19 January 2011, the EU emissions spot market for pollution permits was closed after computer hackers stole 28 to 30 million euros ($41.12 million) worth of emissions allowances from the national registries of several European countries within a few days.
The Czech Registry for Emissions Trading was especially hard hit, with 7 million euros' worth of allowances stolen; the registries of Austria, Greece, Estonia, and Poland were also attacked. A phishing scam is suspected to have enabled hackers to log into unsuspecting companies' carbon credit accounts and transfer the allowances to themselves, allowing them then to be sold. The European Commission said it would "proceed to determine together with national authorities what minimum security measures need to be put in place before the suspension of a registry can be lifted". Maria Kokkonen, EC spokeswoman for climate issues, said that national registries could be reopened once sufficient security measures had been enacted and member countries had submitted to the EC a report on their IT security protocol. The Czech registry said there were still legal and administrative hurdles to be overcome, and Jiri Stastny, chairman of OTE AS, the Czech registry operator, said that until there was recourse for victims of such theft, and a system was in place to return allowances to their rightful owners, the Czech registry would remain closed. Registry officials in Germany and Estonia confirmed that they had located 610,000 allowances stolen from the Czech registry, according to Mr. Stastny; another 500,000 of the stolen Czech allowances are thought to be in accounts in the UK, according to the OTE. Cyber fraudsters have also attacked the EU ETS with a "phishing" scam which cost one company €1.5 million; in response, the EU has revised the ETS rules to combat crime. The security breaches raised fears among some traders that they might have unknowingly purchased stolen allowances which they might later have to forfeit. The ETS had experienced a previous phishing scam in 2010, which caused 13 European markets to shut down, and criminals cleared 5 million euros in another cross-border fraud in 2008 and 2009.

VAT fraud
In 2009, Europol reported that 90% of the market volume of emissions traded in some countries could be the result of tax fraud – specifically missing trader fraud – costing governments more than 5 billion euros. German prosecutors confirmed in March 2011 that value-added-tax fraud in the trade of carbon-dioxide emissions had deprived the German state of about €850 million ($1.19 billion). In December 2011, a German court sentenced six people to jail terms of between three years and seven years and ten months in a trial involving evasion of taxes on carbon permits. A French court sentenced five people to one to five years in jail, and to heavy fines, for evading tax through carbon trading. In the UK, a first trial over VAT fraud in the carbon market was scheduled to start in February 2012.

Views on the EU ETS
People and organizations have responded to the EU ETS in different ways. Mr. Anne Theo Seinen, of the EC's Directorate-General for the Environment, described Phase I as a "learning phase" in which, for example, the infrastructure and institutions for the ETS were set up (UK Parliament, 2009). In his view, the carbon price in Phase I had resulted in some abatement. Seinen also commented that the EU ETS needed to be supported by other policies for technology and renewable energy. According to CCC (2008, p. 155), technology policy is necessary to overcome market failures associated with delivering low-carbon technologies, e.g., by supporting research and development. In 2009, the World Wildlife Fund commented that there was no indication that the EU ETS had influenced longer-term investment decisions.
In their view, the Phase III scheme brought about significant improvements but still suffered from major weaknesses. Jones et al. (2008, p. 24) suggested that the EU ETS needed further reform to achieve its potential. A 2016 survey of German companies participating in the EU ETS found that, under the trading conditions then prevailing, the EU ETS had generated weak incentives for participating firms to adopt carbon abatement measures.

Criticisms
The EU ETS has been criticized on several grounds, including over-allocation, windfall profits, price volatility, and, in general, failing to meet its goals. Proponents maintain, however, that Phase I of the EU ETS (2005–2007) was a "learning phase" designed primarily to establish baselines and create the infrastructure for a carbon market, not to achieve significant reductions. A number of design flaws have limited the effectiveness of the scheme. In the initial 2005–07 period, emission caps were not tight enough to drive a significant reduction in emissions: the total allocation of allowances turned out to exceed actual emissions, which drove the carbon price down to zero in 2007. The oversupply arose because the EU based its allocation on emissions data from the European Environment Agency in Copenhagen, which uses a horizontal, activity-based emissions definition similar to that of the United Nations, whereas the EU ETS transaction log in Brussels uses a vertical, installation-based emissions measurement system. This caused an oversupply of 200 million tonnes (10% of the market) in the EU ETS in the first phase, and prices collapsed.

In addition, the EU ETS has been criticized as having caused a disruptive spike in energy prices. Defenders of the scheme say that this spike did not correlate with the price of permits, and in fact the largest price increase occurred at a time (March–December 2007) when the cost of permits was negligible. Researchers Preston Teeter and Jorgen Sandberg have argued that it is largely the uncertainty behind the EU's scheme that has resulted in such a tepid and informal response by regulated organizations. Their research revealed a similar outcome in Australia, where organizations saw little incentive to innovate or even to comply with cap-and-trade regulations. Some critics in the EU blamed the EU ETS for contributing to the 2021 global energy crisis.

Over-allocation
There was an oversupply of emissions allowances in EU ETS Phase I, which drove the carbon price down to zero in 2007 (CCC, 2008, p. 140). The oversupply reflects the difficulty of predicting future emissions, which is necessary in setting a cap. Given poor data on emissions baselines, the inherent uncertainty of emissions forecasts, and the very modest reduction goals of the Phase I cap (1–2% across the EU), it was entirely foreseeable that the cap might be set too high. This problem naturally diminishes as the cap tightens. The EU's Phase II cap was more than 6% below 2005 levels – much stronger than Phase I, and readily distinguishable from business-as-usual emissions levels. Over-allocation does not imply that no abatement occurred. Even with over-allocation, there was theoretically a price on carbon (except for installations that received hundreds of thousands of free allowances), and for some installations the price had some effect on emitters' behavior.
Verified emissions in 2005 were 3–4% below projected emissions, and analysis suggests that at least part of that reduction was due to the EU ETS. In September 2012, Thomson Reuters Point Carbon calculated that the first Kyoto Protocol commitment period had been oversupplied by about 13 billion tonnes (13.1 Gt) of CO2 and that the second commitment period (2013–2020) was likely to start with a surplus of Assigned Amount Units (AAUs).

Windfall profits
According to Newbery (2009), the price of EUAs was included in the final price of electricity, so the free allocation of permits was cashed in at the EUA price by fossil generators, resulting in a "massive windfall gain". Newbery (2009) wrote that "[there] is no case for repeating such a willful misuse of the value of a common property resource that the country should own". In the view of 4CMR (2009), all permits in the EU ETS should be auctioned, which would avoid possible windfall profits in all sectors.

Price volatility
The price of emissions permits tripled in the first six months of Phase I, collapsed by half in a one-week period in 2006, and declined to zero over the next twelve months. Such movements, and the volatility they imply, raised questions about the ability of the Phase I system to provide stable incentives to emitters. For future phases, measures such as banking of allowances, auctioning, and price floors were considered as ways to mitigate volatility. However, considerable volatility is expected of this type of market, and the volatility seen was broadly in line with that of energy commodities generally; producers and consumers in those markets nonetheless respond rationally and effectively to price signals. Newbery (2009) commented that Phase I of the EU ETS was not delivering the stable carbon price necessary for long-term, low-carbon investment decisions. He suggested that efforts should be made to stabilize carbon prices, e.g., through a price ceiling and a price floor. This led to the reforms outlined above in Phases II and III.

Offsetting

Project-based offsetting
The EU ETS is "linked" to Joint Implementation and Clean Development Mechanism projects in that it allows the limited use of "offset credits" from them. Participating firms were allowed to use some Certified Emission Reduction units (CERs) from 2005 and Emission Reduction Units (ERUs) from 2008. Each member state's National Allocation Plan had to specify a percentage of the national allocation that would cap the CERs and ERUs that could be used. CERs and ERUs from nuclear facilities and from Land Use, Land-Use Change and Forestry could not be used. The main theoretical advantage of allowing free trading of credits is that it allows mitigation to be done at least cost (CCC, 2008, p. 160), because the marginal costs of abatement – that is, the incremental costs of preventing the emission of one extra ton of CO2e into the atmosphere – differ among countries. In terms of the UK's climate change policy, CCC (2008) noted three arguments against too great a reliance on credits. First, rich countries need to demonstrate that a low-carbon economy is possible and compatible with economic prosperity, in order to convince developing countries to lower their emissions; additionally, domestic action by rich countries drives investment towards a low-carbon economy. Second, an ambitious long-term target to reduce emissions, e.g., an 80% cut in UK emissions by 2050, requires significant domestic progress by 2020 and 2030.
Third, CDM credits are inherently less robust than a cap-and-trade system, in which reductions in total emissions are required. Due to the economic downturn, states have pushed successfully for a more generous approach towards the use of CDM/JI credits post-2012. The 2009 EU ETS Amending Directive states that credits can be used for up to 50% of the EU-wide reductions below the 2005 levels of existing sectors over the period 2008–2020. Moreover, it has been argued that the volume of CDM/JI credits, if carried over from Phase II (2008–2012) to Phase III (2013–2020) of the EU ETS, will undermine its environmental effectiveness, despite the requirement of supplementarity in the Kyoto Protocol. In January 2011, the EU Climate Change Committee banned the use of CDM Certified Emission Reduction units from HFC-23 destruction in the EU ETS from 1 May 2013; the ban also covers nitrous oxide (N2O) from adipic acid production. The reasons given were perverse incentives, the lack of additionality, the lack of environmental integrity, the undermining of the Montreal Protocol, cost-ineffectiveness, and the distorting effect of a few projects in advanced developing countries receiving too many CERs.

Buying and deleting emissions allowances
As an alternative to CDM and JI projects, emissions can be offset directly by buying and deleting emissions allowances inside the ETS. This avoids several problems of CDM and JI, such as additionality, measurement, leakage, permanence, and verification. Buying and cancelling allowances also makes it possible to include more emission sources in the ETS (such as traffic). Furthermore, it reduces the available allowances in the cap-and-trade system, which means that it reduces the emissions that can be produced by covered sources.

See also
Carbon emission trading
Carbon finance
Carbon tax
Energy policy of the European Union
European Climate Change Programme
ICAP (International Carbon Action Partnership)
Mitigation of global warming
Single European Sky

References

External links
Official pages
European Commission, "Emissions Trading System (EU ETS)"
"Directive 2003/87/EC of the European Parliament and of the Council of 13 October 2003", Official Journal of the European Union – EU Directive establishing the EU ETS
How ETS works
UK Defra: general overview at the UK Department for Environment, Food and Rural Affairs
Pew Center White Paper: overview of the EU ETS
Emission Trading Fact Book of Inagendo (contains, among other things, a glossary of ETS terms)
Video from the Climate and Pollution Agency (Norway): The Emission Trading Scheme
Key reports and assessments
Prospects for the EU Emissions Trading System, Library of the European Parliament, June 2012
National Allocation Plans 2005–7: Do they deliver? Executive summary of report by Climate Action Network
Carbon Trade Watch
WWF website: "The environmental effectiveness and economic efficiency of the EU ETS: Structural aspects of the allocation", by WWF and Öko-Institut, 9 November 2005
The European Emission Trading Scheme Put to the Test of State Aid Rules
Scarcity and Allocation of Allowances in the EU Emissions Trading Scheme – A Legal Analysis
Case law
Swiss International Air Lines AG v UK SoS for Energy and Climate Change [2015] EWCA Civ 331
methane emissions
Increasing methane emissions are a major contributor to the rising concentration of greenhouse gases in Earth's atmosphere, and are responsible for up to one-third of near-term global heating. During 2019, about 60% (360 million tons) of methane released globally was from human activities, while natural sources contributed about 40% (230 million tons). Reducing methane emissions by capturing and utilizing the gas can produce simultaneous environmental and economic benefits. Since the Industrial Revolution, concentrations of methane in the atmosphere have more than doubled, and about 20 percent of the warming the planet has experienced can be attributed to the gas. About one-third (33%) of anthropogenic emissions are from gas released during the extraction and delivery of fossil fuels, mostly due to gas venting and gas leaks from both active fossil fuel infrastructure and orphan wells; Russia is the world's top methane emitter from oil and gas. Animal agriculture is a similarly large source (30%), primarily because of enteric fermentation by ruminant livestock such as cattle and sheep. According to the Global Methane Assessment published in 2021, methane emissions from livestock (including cattle) are the largest source of agricultural emissions worldwide. A single cow can produce up to 99 kg of methane per year, and ruminant livestock can produce 250 to 500 L of methane per day. Human consumer waste flows, especially those passing through landfills and wastewater treatment, have grown to become a third major category (18%). Plant agriculture, including both food and biomass production, constitutes a fourth group (15%), with rice production being the largest single contributor. The world's wetlands contribute about three-quarters (75%) of the enduring natural sources of methane. Seepages from near-surface hydrocarbon and clathrate hydrate deposits, volcanic releases, wildfires, and termite emissions account for much of the remainder. Contributions from the surviving wild populations of ruminant mammals are vastly overwhelmed by those of cattle, humans, and other livestock animals. The Economist has recommended setting methane emission targets, as a reduction in methane emissions would allow more time to tackle the more challenging problem of carbon emissions.

Atmospheric concentration and warming influence
The atmospheric methane (CH4) concentration is increasing and exceeded 1,860 parts per billion in 2019, two and a half times the pre-industrial level. The methane itself causes direct radiative forcing that is second only to that of carbon dioxide (CO2). Due to interactions with oxygen compounds stimulated by sunlight, CH4 can also increase the atmospheric presence of shorter-lived ozone and water vapour, themselves potent warming gases; atmospheric researchers call this amplification of methane's near-term warming influence indirect radiative forcing. When such interactions occur, longer-lived and less potent CO2 is also produced. Including both the direct and indirect forcings, the increase in atmospheric methane is responsible for about one-third of near-term global heating. Though methane causes far more heat to be trapped than the same mass of carbon dioxide, less than half of the emitted CH4 remains in the atmosphere after a decade. On average, carbon dioxide warms for much longer, assuming no change in rates of carbon sequestration. The global warming potential (GWP) is a way of comparing the warming due to other gases to that from carbon dioxide over a given time period.
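GWP-based CO2-equivalent accounting is a simple multiplication, but the result depends strongly on the chosen time horizon. A minimal sketch, using the GWP20 of 85 quoted in the next paragraph and an assumed mid-range GWP100 of 30 (the text gives a range of 28–34):

```python
# Illustrative CO2-equivalent conversion using GWP values from the text:
# GWP20 = 85; GWP100 is quoted as 28-34, so the mid-range 30 below is
# an assumption.

GWP = {"CH4_20yr": 85, "CH4_100yr": 30}

def co2_equivalent(mass_ch4_tonnes: float, horizon: str) -> float:
    """Convert a mass of methane to tonnes of CO2-equivalent."""
    return mass_ch4_tonnes * GWP[f"CH4_{horizon}"]

# One tonne of CH4 counts as 85 t CO2e over 20 years,
# but only ~30 t CO2e over 100 years.
print(co2_equivalent(1.0, "20yr"))   # 85.0
print(co2_equivalent(1.0, "100yr"))  # 30.0
```

This horizon-dependence is why methane-focused policies are often framed in terms of near-term warming.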
Methane's GWP20 of 85 means that a ton of CH4 emitted into the atmosphere creates approximately 85 times the atmospheric warming of a ton of CO2 over a period of 20 years. On a 100-year timescale, methane's GWP100 is in the range of 28–34. Methane emissions are important because reducing them can buy time to tackle carbon emissions.

Overview of emission sources
Biogenic methane is actively produced by microorganisms in a process called methanogenesis. Under certain conditions, the process mix responsible for a sample of methane may be deduced from the ratio of the isotopes of carbon it contains, through analysis methods similar to carbon dating.

Anthropogenic
A comprehensive systems method for describing the sources of methane arising from human society is known as anthropogenic metabolism. As of 2020, emission volumes from some sources remain more uncertain than others, due in part to localized emission spikes not captured by the limited global measurement capability. The time required for a methane emission to become well mixed throughout Earth's troposphere is about 1–2 years. Satellite data indicate that over 80% of the growth in methane emissions during 2010–2019 came from tropical terrestrial sources. There is accumulating research and data showing that methane emissions from the oil and gas industry – that is, from fossil fuel extraction, distribution and use – are much larger than previously thought.

Natural
Natural sources have always been a part of the methane cycle, although wetland emissions have been declining due to draining for agricultural and building areas.

Methanogenesis
Most ecological emissions of methane relate directly to methanogens generating methane in warm, moist soils as well as in the digestive tracts of certain animals. Methanogens are methane-producing microorganisms. To produce energy, they use an anaerobic process called methanogenesis; this process is used in lieu of aerobic (oxygen-using) processes because methanogens are unable to metabolise in the presence of even small concentrations of oxygen. Methanogenesis occurs primarily in anaerobic conditions because of the lack of availability of other oxidants. In these conditions, microscopic organisms called archaea use acetate and hydrogen to break down essential resources in a process called fermentation; when acetate is broken down, methane is released into the surrounding environment.

Acetoclastic methanogenesis – certain archaea cleave acetate produced during anaerobic fermentation to yield methane and carbon dioxide: CH3COOH → CH4 + CO2
Hydrogenotrophic methanogenesis – archaea oxidize hydrogen with carbon dioxide to yield methane and water: 4H2 + CO2 → CH4 + 2H2O

While acetoclastic and hydrogenotrophic methanogenesis are the two major source reactions for atmospheric methane, other minor biological methane source reactions also occur. For example, it has been discovered that leaf surface wax exposed to UV radiation in the presence of oxygen is an aerobic source of methane.

Natural methane cycles
Emissions of methane into the atmosphere are directly related to temperature and moisture. Thus, the natural environmental changes that occur during seasonal change act as a major control on methane emission.
Additionally, even changes in temperature during the day can affect the amount of methane that is produced and consumed. The concentration of methane is higher in the Northern Hemisphere, since most sources (both natural and human) are located on land and the Northern Hemisphere has more land mass. Concentrations also vary seasonally, with, for example, a minimum in the northern tropics during April–May, mainly due to removal by the hydroxyl radical. Plants that produce methane can emit as much as two to four times more methane during the day than during the night, which is directly related to the fact that plants tend to rely on solar energy to drive chemical processes. Additionally, methane emissions are affected by the level of water sources: seasonal flooding during the spring and summer naturally increases the amount of methane released into the air.

Plants
The 2007 IPCC report noted that living plants (e.g. forests) had recently been identified as a potentially important source of methane, possibly responsible for approximately 10 to 30% of atmospheric methane. A 2006 paper calculated emissions of 62–236 Tg per year, stating that "this newly identified source may have important implications", though the authors stressed that "our findings are preliminary with regard to the methane emission strength". These findings were called into question in a 2007 paper which found "there is no evidence for substantial aerobic methane emission by terrestrial plants, maximally 0.3% of the previously published values". While the details of plant methane emissions have yet to be confirmed, plants as a significant methane source would help fill in the gaps of previous global methane budgets, as well as explain the large plumes of methane that have been observed over the tropics.

Wetlands
In wetlands, where the rate of methane production is high, plants help methane travel into the atmosphere – acting like inverted lightning rods as they direct the gas up through the soil and into the air. They are also suspected of producing methane themselves, but because the plants would have to use aerobic conditions to do so, the process itself remains unidentified, according to a 2014 Biogeochemistry article. A 1994 article on methane emissions from northern wetlands noted that atmospheric methane concentrations had increased at a rate of about 0.9% per year since the 1800s.

Human-caused methane emissions
The IPCC's AR6 stated, "It is unequivocal that the increases in atmospheric carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) since the pre-industrial period are overwhelmingly caused by human activities." Atmospheric methane accounted for 20% of the total radiative forcing (RF) from all of the long-lived and globally mixed greenhouse gases. According to the 2021 assessment by the Climate and Clean Air Coalition (CCAC) and the United Nations Environment Programme (UNEP), over 50% of global methane emissions are caused by human activities; of the anthropogenic total, fossil fuels account for 35%, waste for 20%, and agriculture for 40%. Within these categories, the oil and gas industry accounts for 23% and coal mining for 12%; landfills and wastewater account for 20%; and manure and enteric fermentation represent 32%, with rice cultivation representing 8%. The most clearly identified rise in atmospheric methane as a result of human activity occurred in the 1700s during the Industrial Revolution.
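The ~0.9% annual growth rate quoted above compounds quickly; a back-of-envelope calculation (mine, not a published result) shows the doubling time such a rate implies:

```python
import math

# Doubling time implied by a constant 0.9% annual increase in
# atmospheric methane concentration (the growth rate quoted above).
growth_rate = 0.009
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"doubles every {doubling_time:.0f} years")  # ~77 years
```

A sustained rate of that size is consistent with the statement earlier in the article that concentrations have more than doubled since the Industrial Revolution.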
During the 20th century – mainly because of the use of fossil fuels – the concentration of methane in the atmosphere increased, stabilized briefly in the 1990s, and began to increase again in 2007. After 2014 the increase accelerated, and by 2017 the concentration reached 1,850 parts per billion (ppb). Increases in methane levels due to modern human activities arise from a number of specific sources, including industrial activity; the extraction of oil and natural gas from underground reserves; the transportation of oil and natural gas via pipeline; and the melting of permafrost in Arctic regions due to global warming, which is caused by human use of fossil fuels. The primary component of natural gas is methane, which is emitted to the atmosphere in every stage of natural gas "production, processing, storage, transmission, and distribution".

Methane gas from methane clathrates
At high pressures, such as are found on the bottom of the ocean, methane forms a solid clathrate with water, known as methane hydrate. An unknown, but possibly very large, quantity of methane is trapped in this form in ocean sediments. Theories suggest that, should global warming heat them sufficiently, all of this methane gas could be released into the atmosphere. Since methane gas is twenty-five times stronger (for a given weight, averaged over 100 years) than CO2 as a greenhouse gas, this would immensely magnify the greenhouse effect. The 2021 IPCC Sixth Assessment Report (AR6) Working Group 1 report, however, said that it was "very unlikely that gas clathrates (mostly methane) in deeper terrestrial permafrost and subsea clathrates will lead to a detectable departure from the emissions trajectory during this century".

Aquatic ecosystems
Natural and anthropogenic methane emissions from aquatic ecosystems are estimated to contribute about half of total global emissions. Urbanization and eutrophication are expected to lead to increased methane emissions from aquatic ecosystems.

Permafrost
Permafrost contains almost twice as much carbon as the atmosphere, with ~20 Gt of permafrost-associated methane trapped in methane clathrates. Permafrost thaw results in the formation of thermokarst lakes in ice-rich yedoma deposits, and methane frozen in permafrost is slowly released as the permafrost thaws. Radiocarbon dating of trace methane in lake bubbles and of soil organic carbon concluded that 0.2 to 2.5 Pg of permafrost carbon has been released as methane and carbon dioxide over the last 60 years. The 2020 heat wave may have released significant methane from carbonate deposits in Siberian permafrost. Methane emissions from the 'permafrost carbon feedback' – the amplification of surface warming due to enhanced radiative forcing by carbon release from permafrost – could contribute an estimated 205 Gt of carbon emissions, leading to up to 0.5 °C (0.9 °F) of additional warming by the end of the 21st century. However, recent research based on the carbon isotopic composition of atmospheric methane trapped in bubbles in Antarctic ice indicates that methane emissions from permafrost and methane hydrates were minor during the last deglaciation, suggesting that future permafrost methane emissions may be lower than previously estimated.

Emissions due to oil and gas extraction
A 2005 Wuppertal Institute for Climate, Environment and Energy article identified pipelines that transport natural gas as a source of methane emissions.
The article cited the example of the trans-Siberian natural gas pipeline system, which carries gas with a methane concentration of 97% from the Yamburg and Urengoy gas fields in Russia to western and central Europe. In accordance with the IPCC and other natural gas emissions control groups, measurements had to be taken throughout the pipeline to measure methane emissions from technological discharges and leaks at the pipeline fittings and vents. Although the majority of the pipeline's emissions were carbon dioxide, a significant amount of methane was also being consistently released as a result of leaks and breakdowns. In 2001, natural gas emissions from the pipeline and natural gas transportation system accounted for 1% of the natural gas produced; between 2001 and 2005 this was reduced to 0.7%, and the 2001 value was itself significantly less than that of 1996.

A 2012 Climatic Change article and a 2014 publication by a team of scientists led by Robert W. Howarth said that there was strong evidence that "shale gas has a larger GHG footprint than conventional gas, considered over any time scale. The GHG footprint of shale gas also exceeds that of oil or coal when considered at decadal time scales." Howarth called for policy changes to regulate methane emissions resulting from hydraulic fracturing and shale gas development. A 2013 study by a team of researchers led by Scot M. Miller said that U.S. greenhouse gas reduction policies in 2013 were based on what appeared to be significant underestimates of anthropogenic methane emissions: "greenhouse gas emissions from agriculture and fossil fuel extraction and processing" – oil and/or natural gas – were "likely a factor of two or greater than cited in existing studies."

By 2001, following a detailed study of anthropogenic sources of climate change, IPCC researchers had found "stronger evidence that most of the observed warming observed over the last 50 years [was] attributable to human activities." Since the Industrial Revolution, humans have had a major impact on concentrations of atmospheric methane, increasing atmospheric concentrations by roughly 250%. According to the 2021 IPCC report, 30–50% of the current rise in temperatures is caused by emissions of methane, and reducing methane is a fast way of mitigating climate change. An alliance of 107 countries, including Brazil, the EU and the US, has joined the pact known as the Global Methane Pledge, committing to a collective goal of reducing global methane emissions by at least 30% from 2020 levels by 2030.

Animals and livestock
Ruminant animals, particularly cows and sheep, contain bacteria in their gastrointestinal systems that help to break down plant material. Some of these microorganisms use the acetate from the plant material to produce methane, and because these bacteria live in the stomachs and intestines of ruminants, whenever the animal "burps" or defecates it emits methane as well. Based upon a 2012 study in the Snowy Mountains region of Australia, the amount of methane emitted by one cow is equivalent to the amount of methane that around 3.4 hectares of soil hosting methanotrophic bacteria can consume. The research showed 8 tonnes of methane oxidized by methanotrophic bacteria per year on a 1,000-hectare farm, while the 200 cows on the same farm emitted 5.4 tonnes of methane per year. Hence, one cow emitted 27 kg of methane per year, while the bacteria oxidized 8 kg per hectare, so the emissions of one cow were oxidized by 27/8 ≈ 3.4 hectares.
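The arithmetic above can be checked directly; all the figures in this trivial sketch are the study values quoted in the text:

```python
# Back-of-envelope check of the Snowy Mountains figures quoted above.
farm_area_ha = 1_000
oxidized_total_kg = 8_000          # 8 t CH4 oxidized per year on the farm
herd_size = 200
herd_emissions_kg = 5_400          # 5.4 t CH4 emitted per year by the herd

per_cow_kg = herd_emissions_kg / herd_size             # 27 kg per cow
oxidized_per_ha_kg = oxidized_total_kg / farm_area_ha  # 8 kg per hectare
hectares_per_cow = per_cow_kg / oxidized_per_ha_kg     # ~3.4 ha

print(per_cow_kg, oxidized_per_ha_kg, round(hectares_per_cow, 1))
# -> 27.0 8.0 3.4
```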
Termites also contain methanogenic microorganisms in their gut. Some of these microorganisms are so specialized that they live nowhere else in the world except in the third gut of termites. These microorganisms break down biotic components to produce ethanol, as well as methane as a byproduct. However, unlike ruminants, which lose 20% of the energy from the plants they eat, termites lose only 2% of their energy in the process. Thus, comparatively, termites do not have to eat as much food as ruminants to obtain the same amount of energy, and they give off proportionally less methane.

In 2001, NASA researchers confirmed the important role of enteric fermentation in livestock in global warming. A 2006 UN FAO report found that livestock generate more greenhouse gases, as measured in CO2 equivalents, than the entire transportation sector: livestock accounts for 9% of anthropogenic CO2, 65% of anthropogenic nitrous oxide and 37% of anthropogenic methane. Since then, animal science and biotechnology researchers have focused on methanogens in the rumen of livestock and on the mitigation of methane emissions. Nicholas Stern, the author of the 2006 Stern Review on climate change, has stated that "people will need to turn vegetarian if the world is to conquer climate change". In 2003, the president of the National Academy of Sciences, atmospheric scientist Ralph Cicerone, said that the increase in the number of methane-producing dairy and beef cattle was a "serious topic", as methane was the "second-most-important greenhouse gas in the atmosphere". Approximately 5% of the methane is released via flatus, whereas the other 95% is released via eructation. Vaccines are under development to reduce the amount introduced through eructation, and Asparagopsis seaweed as a livestock feed additive has reduced methane emissions by more than 80%.

Others

Ecological conversion
Conversion of forests and natural environments into agricultural plots increases the amount of nitrogen in the soil, which inhibits methane oxidation and so weakens the ability of the methanotrophic bacteria in the soil to act as sinks. Additionally, by changing the level of the water table, humans can directly affect the soil's ability to act as a source or sink. The relationship between water table levels and methane emission is explained in the wetlands section on natural sources.

Rice agriculture
With a continuously growing world population, rice agriculture has become one of the most significant anthropogenic sources of methane. With warm weather and water-logged soil, rice paddies act like wetlands, but ones generated by humans for the purpose of food production. Owing to their swamp-like environment, rice fields yield 50–100 million metric tons of methane emissions each year, meaning rice agriculture is responsible for approximately 15 to 20% of anthropogenic methane emissions.

Landfills
Due to their large collections of organic matter and the availability of anaerobic conditions, landfills are the third largest source of atmospheric methane in the United States, accounting for roughly 18.2% of methane emissions globally in 2014. When waste is first added to a landfill, oxygen is abundant and the waste thus undergoes aerobic decomposition, during which very little methane is produced. However, generally within a year, oxygen levels are depleted and anaerobic conditions come to dominate the landfill, allowing methanogens to take over the decomposition process.
These methanogens emit methane into the atmosphere, and even after the landfill is closed, the mass of decaying matter allows them to continue producing methane for years.

Waste water treatment
Waste water treatment facilities act to remove organic matter, solids, pathogens, and chemical hazards resulting from human contamination. Methane emission in waste treatment facilities occurs as a result of anaerobic treatment of organic compounds and anaerobic biodegradation of sludge.

Biomass burning
Incomplete burning of both living and dead organic matter results in the emission of methane. While natural wildfires can contribute, the vast majority of biomass burning is caused by humans – everything from accidental burning by civilians, to deliberate burning used to clear land, to the burning of waste.

Oil and natural gas supply chain
Methane is a primary component of natural gas, and thus a significant amount of methane is lost to the atmosphere during the production, processing, storage, transmission, and distribution of natural gas. According to the EPA's Inventory of U.S. Greenhouse Gas Emissions and Sinks: 1990–2015 report, 2015 methane emissions from natural gas and petroleum systems totaled 8.1 Tg per year in the United States. Individually, the EPA estimated that the natural gas system emitted 6.5 Tg per year of methane while petroleum systems emitted 1.6 Tg per year. Methane emissions occur in all sectors of the natural gas industry, from drilling and production, through gathering, processing and transmission, to distribution. These emissions occur through normal operation, routine maintenance, fugitive leaks, system upsets, and venting of equipment. In the oil industry, some underground crude contains natural gas that is entrained in the oil at high reservoir pressures; when oil is removed from the reservoir, this associated gas is produced. However, a review of methane emission studies revealed that the EPA's 1990–2015 inventory likely significantly underestimated 2015 methane emissions from the oil and natural gas supply chain: the review concluded that in 2015 the supply chain emitted 13 Tg per year of methane, about 60% more than the EPA estimate for the same period. The authors wrote that the most likely cause of the discrepancy is undersampling by the EPA of so-called "abnormal operating conditions", during which large quantities of methane can be emitted.

Methane slip from gas engines
The use of natural gas and biogas in internal combustion engines – for applications such as electricity production, cogeneration, heavy vehicles, and marine vessels such as LNG carriers that use boil-off gas for propulsion – emits a certain percentage of unburned hydrocarbons, of which about 85% is methane. That this "methane slip" may offset or even cancel out the climate advantages of lower CO2 and particulate emissions is described in a 2016 EU issue paper on methane slip from marine engines: "Emissions of unburnt methane (known as the 'methane slip') were around 7 g per kg LNG at higher engine loads, rising to 23–36 g at lower loads. This increase could be due to slow combustion at lower temperatures, which allows small quantities of gas to avoid the combustion process". Road vehicles run at low load more often than marine engines, causing relatively higher methane slip.
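The significance of the quoted slip figures can be made concrete with a rough CO2-equivalence calculation. This is a sketch under stated assumptions – LNG is treated as pure methane, and the GWP values are those discussed earlier in the article (the GWP100 of 30 is an assumed mid-range value):

```python
# Rough CO2-equivalent impact of methane slip from an LNG engine.
# Assumes LNG ~ pure methane; slip figures are those quoted above.

CO2_PER_KG_CH4 = 44 / 16          # kg CO2 from fully burning 1 kg CH4 (~2.75)
GWP100, GWP20 = 30, 85            # assumed mid-range 100-yr and 20-yr values

for slip_g_per_kg in (7, 36):     # high-load vs worst low-load slip
    slip_kg = slip_g_per_kg / 1000
    combustion_co2 = (1 - slip_kg) * CO2_PER_KG_CH4   # CO2 from the burned fuel
    print(f"slip {slip_g_per_kg} g/kg: "
          f"{100 * slip_kg * GWP100 / combustion_co2:.0f}% of combustion CO2 "
          f"(100-yr basis), "
          f"{100 * slip_kg * GWP20 / combustion_co2:.0f}% (20-yr basis)")
```

On this rough basis, the worst low-load slip figure (36 g/kg) adds warming exceeding that of the engine's own CO2 on a 20-year horizon, which illustrates why the issue paper treats methane slip as potentially cancelling out the fuel's CO2 advantage.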
Coal mining
In 2014, NASA researchers reported the discovery of a 2,500-square-mile (6,500 km2) methane cloud floating over the Four Corners region of the south-western United States. The discovery was based on data from the European Space Agency's Scanning Imaging Absorption Spectrometer for Atmospheric Chartography instrument from 2002 to 2012. The report concluded that "the source is likely from established gas, coal, and coalbed methane mining and processing." The region emitted 590,000 metric tons of methane every year between 2002 and 2012 – almost 3.5 times the widely used estimates in the European Union's Emissions Database for Global Atmospheric Research. In 2019, the International Energy Agency (IEA) estimated that the methane emissions leaking from the world's coal mines were warming the global climate at the same rate as the shipping and aviation industries combined.

Release of stored Arctic methane due to global warming
Global warming due to fossil fuel emissions has caused Arctic methane release, i.e. the release of methane from seas and soils in permafrost regions of the Arctic. Although in the long term this is a natural process, methane release is being exacerbated and accelerated by global warming. This has negative effects, as methane is itself a powerful greenhouse gas. The Arctic region is one of the many natural sources of methane. Global warming accelerates its release, through both the release of methane from existing stores and methanogenesis in rotting biomass. Large quantities of methane are stored in the Arctic in natural gas deposits, in permafrost, and as undersea clathrates. Permafrost and clathrates degrade on warming, so large releases of methane from these sources may arise as a result of global warming. Other sources of methane include submarine taliks, river transport, ice complex retreat, submarine permafrost and decaying gas hydrate deposits.

Global methane emissions monitoring
The Tropospheric Monitoring Instrument (Tropomi) aboard the European Space Agency's Sentinel-5P spacecraft, launched in October 2017, provides the most detailed publicly available methane emissions monitoring, with a resolution of about 50 square kilometres. MethaneSAT is under development by the Environmental Defense Fund in partnership with researchers at Harvard University to monitor methane emissions with an improved resolution of 1 kilometre. MethaneSAT is designed to monitor 50 major oil and gas facilities, and could also be used for monitoring of landfills and agriculture. It receives funding from the Audacious Project (a collaboration of TED and the Gates Foundation), and is projected to launch as soon as 2024. Uncertainties in methane emissions, including so-called "super-emitter" fossil extractions and unexplained atmospheric fluctuations, highlight the need for improved monitoring at both regional and global scales. Satellites have recently begun to come online with the capability to measure methane and other more potent greenhouse gases at improving resolution. The Tropomi instrument can measure methane, sulphur dioxide, nitrogen dioxide, carbon monoxide, aerosol, and ozone concentrations in Earth's troposphere at resolutions of several kilometres. In 2022, a study using data from the instrument to monitor large methane emissions worldwide was published; 1,200 large methane plumes were detected over oil and gas extraction sites.
NASA's EMIT instrument has also identified super-emitters. Japan's GOSAT-2 platform, launched in 2018, provides similar capability. The Claire satellite, launched in 2016 by the Canadian firm GHGSat, uses data from Tropomi to home in on sources of methane emissions as small as 15 m2. Other satellites are planned that will increase the precision and frequency of methane measurements, as well as provide a greater ability to attribute emissions to terrestrial sources; these include MethaneSAT (discussed above) and CarbonMapper. Global maps combining satellite data to help identify and monitor major methane emission sources are being built. The International Methane Emissions Observatory was created by the UN.

Quantifying the global methane budget
In order to mitigate climate change, scientists have been focusing on quantifying the global methane (CH4) budget as the concentration of methane continues to increase – it is now second after carbon dioxide in terms of climate forcing. Further understanding of atmospheric methane is necessary for "assessing realistic pathways" towards climate change mitigation. Various research groups have published estimates of methane emissions. [Table of estimates not reproduced here.]

National reduction policies
In 2010, China implemented regulations requiring coal plants to either capture methane emissions or convert the methane into CO2. According to a Nature Communications paper published in January 2019, methane emissions instead increased 50 percent between 2000 and 2015. In March 2020, Exxon called for stricter methane regulations, which would include detection and repair of methane leaks, minimization of venting and releases of unburned methane, and reporting requirements for companies. However, in August 2020, the U.S. Environmental Protection Agency rescinded a prior tightening of methane emission rules for the U.S. oil and gas industry.

Approaches to reduce emissions

Natural gas industries
According to the International Energy Agency (IEA), about 40% of methane emissions from the fossil fuel industry could be "eliminated at no net cost for firms" using existing technologies; this 40% represents 9% of all human methane emissions. To reduce emissions from the natural gas industries, the EPA developed the Natural Gas STAR Program, also known as Gas STAR. The Coalbed Methane Outreach Program (CMOP) helps and encourages the mining industry to find ways to use or sell methane that would otherwise be released from coal mines into the atmosphere.

Livestock
To counteract the amount of methane that ruminants give off, a type of drug called monensin (marketed as rumensin) has been developed. This drug is classified as an ionophore, an antibiotic that is naturally produced by a harmless bacterial strain. It not only improves feed efficiency but also reduces the amount of methane gas emitted by the animal and its manure. In addition to medicine, specific manure management techniques have been developed to counteract emissions from livestock manure, and educational resources have begun to be provided for small farms. Management techniques include daily pickup and storage of manure in a completely closed-off storage facility, which prevents runoff from reaching bodies of water. The manure can then be kept in storage until it is either reused as fertilizer or taken away and stored in offsite compost. Nutrient levels of various animal manures are provided to guide their optimal use as compost for gardens and agriculture.
Crops and soils In order to reduce adverse effects on methane oxidation in soil, several steps can be taken. Controlling the usage of nitrogen-enhancing fertilizer and reducing the amount of nitrogen pollution into the air can both lower inhibition of methane oxidation. Additionally, using drier growing conditions for crops such as rice and selecting strains of crops that produce more food per unit area can reduce the amount of land with ideal conditions for methanogenesis. Careful selection of areas of land conversion (for example, plowing down forests to create agricultural fields) can also reduce the destruction of major areas of methane oxidation. Landfills To counteract methane emissions from landfills, on March 12, 1996, the EPA (Environmental Protection Agency) added the "Landfill Rule" to the Clean Air Act. This rule requires large landfills (those that have ever accepted municipal solid waste, have been in use since November 8, 1987, and can hold at least 2.5 million metric tons of waste with a volume greater than 2.5 million cubic meters), and/or landfills with nonmethane organic compound (NMOC) emissions of at least 50 metric tons per year, to collect and combust emitted landfill gas. This set of requirements excludes 96% of the landfills in the USA. While the direct result of this is landfills reducing emission of non-methane compounds that form smog, the indirect result is a reduction of methane emissions as well. In an attempt to absorb the methane that is already being produced from landfills, experiments have been conducted in which nutrients were added to the soil to allow methanotrophs to thrive. These nutrient-supplemented landfills have been shown to act as a small-scale methane sink, allowing the abundance of methanotrophs to sponge the methane from the air to use as energy, effectively reducing the landfill's emissions. See also China United Coalbed Methane Climate change feedback Greenhouse gas emissions Greenhouse Gases Observing Satellite-2 Global Methane Initiative Fugitive gas emissions Notes References External links "Main sources of methane emissions". What's Your Impact. 2014-03-14. Retrieved 2018-03-06. "Greenhouse Gas Emissions - Methane Emissions". EIA. 2011-03-31. Retrieved 2018-03-06.
greenhouse gas emissions from wetlands
Greenhouse gas emissions from wetlands of concern consist primarily of methane and nitrous oxide emissions. Wetlands are the largest natural source of atmospheric methane in the world, and are therefore a major area of concern with respect to climate change. Wetlands account for approximately 20–30% of atmospheric methane through emissions from soils and plants, and contribute an approximate average of 161 Tg of methane to the atmosphere per year. Wetlands are characterized by water-logged soils and distinctive communities of plant and animal species that have adapted to the constant presence of water. This high level of water saturation creates conditions conducive to methane production. Most methanogenesis, or methane production, occurs in oxygen-poor environments. Because the microbes that live in warm, moist environments consume oxygen more rapidly than it can diffuse in from the atmosphere, wetlands are the ideal anaerobic environments for fermentation as well as methanogen activity. However, levels of methanogenesis fluctuate with the availability of oxygen, soil temperature, and the composition of the soil; a warmer, more anaerobic environment with soil rich in organic matter allows for more efficient methanogenesis. Some wetlands are a significant source of methane emissions and some are also emitters of nitrous oxide. Nitrous oxide is a greenhouse gas with a global warming potential 300 times that of carbon dioxide and is the dominant ozone-depleting substance emitted in the 21st century. Wetlands can also act as a sink for greenhouse gases. Emissions by type of wetland Characteristics of wetland classes can help indicate the likely magnitude of methane emissions. However, wetland classes have displayed high variability in methane emissions spatially and temporally. Wetlands are often classified by landscape position, vegetation, and hydrologic regime. Wetland classes include marshes, swamps, bogs, fens, peatlands, muskegs, prairie potholes, and pocosins. Amounts Depending on their characteristics, some wetlands are a significant source of methane emissions and some are also emitters of nitrous oxide. Methane Wetlands account for approximately 20–30% of atmospheric methane through emissions from soils and plants. Nitrous oxide fluxes Nitrous oxide is a greenhouse gas with a global warming potential 300 times that of carbon dioxide and is the dominant ozone-depleting substance emitted in the 21st century. Excess nutrients, mainly from anthropogenic sources, have been shown to significantly increase the N2O fluxes from wetland soils through denitrification and nitrification processes. A study in the intertidal region of a New England salt marsh showed that excess levels of nutrients might cause the marsh to emit N2O rather than sequester it. Data on nitrous oxide fluxes from wetlands in the southern hemisphere are lacking, as are ecosystem-based studies including the role of dominant organisms that alter sediment biogeochemistry. Aquatic invertebrates produce ecologically relevant nitrous oxide emissions due to ingestion of denitrifying bacteria that live within the subtidal sediment and water column, and thus may also be influencing nitrous oxide production within some wetlands. (In the flux measurements reported in the literature, rates are given as hourly rates per unit area; a positive flux implies flux from soil into air, while a negative flux implies flux from air into the soil. Negative N2O fluxes are common and are caused by consumption by the soil.)
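As a rough consistency check, the 161 Tg/yr figure above can be set against the total global methane source, which published budgets put on the order of 550–600 Tg per year; that global total is an assumption here, not a figure from this article. A minimal sketch:

```python
# Rough consistency check: is a 161 Tg/yr wetland methane source compatible
# with the claim that wetlands supply roughly 20-30% of atmospheric methane?
# The global totals of 550-600 Tg/yr are assumptions, not article figures.
wetland_source_tg = 161.0

for global_total_tg in (550.0, 600.0):
    share = wetland_source_tg / global_total_tg
    print(f"Global total {global_total_tg:.0f} Tg/yr -> wetland share {share:.1%}")

# Prints shares of roughly 27-29%, consistent with the quoted 20-30% range.
```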
Pathways of methane emission Because of their high water tables, wetlands counteract the methane-consuming (sink) action that normally occurs in soil. The level of the water table represents the boundary between anaerobic methane production and aerobic methane consumption. When the water table is low, the methane generated within the wetland soil has to come up through the soil and get past a deeper layer of methanotrophic bacteria, thereby reducing emission. Methane transport by vascular plants can bypass this aerobic layer, thus increasing emission. Once produced, methane can reach the atmosphere via three main pathways: molecular diffusion, transport through plant aerenchyma, and ebullition. Primary productivity fuels methane emissions both directly and indirectly, because plants not only provide much of the carbon needed for methane-producing processes in wetlands but can affect its transport as well. Fermentation Fermentation is a process used by certain kinds of microorganisms to break down essential nutrients. In a process called acetoclastic methanogenesis, microorganisms from the domain Archaea produce methane by fermenting acetate into methane and carbon dioxide: H3C-COOH → CH4 + CO2. Depending on the wetland and type of archaea, hydrogenotrophic methanogenesis, another process that yields methane, can also occur. This process occurs as a result of archaea oxidizing hydrogen with carbon dioxide to yield methane and water: 4H2 + CO2 → CH4 + 2H2O. Diffusion Diffusion through the profile refers to the movement of methane up through soil and bodies of water to reach the atmosphere. The importance of diffusion as a pathway varies per wetland based on the type of soil and vegetation. For example, in peatlands, the mass of dead but not fully decayed organic matter results in relatively slow diffusion of methane through the soil. Additionally, because methane can travel more quickly through soil than through water, diffusion plays a much bigger role in wetlands with drier, more loosely compacted soil. Aerenchyma Plant aerenchyma refers to the vessel-like transport tubes within the tissues of certain kinds of plants. Plants with aerenchyma possess porous tissue that allows for direct travel of gases to and from the plant roots. Methane can travel directly up from the soil into the atmosphere using this transport system. The direct "shunt" created by the aerenchyma allows methane to bypass oxidation by the oxygen that is also transported by the plants to their roots. Ebullition Ebullition refers to the sudden release of bubbles of methane into the air. These bubbles occur as a result of methane building up over time in the soil, forming pockets of methane gas. As these pockets of trapped methane grow in size, the level of the soil slowly rises as well. This phenomenon continues until so much pressure builds up that the bubble "pops," transporting the methane up through the soil so quickly that it does not have time to be consumed by the methanotrophic organisms in the soil. With this release of gas, the level of the soil then falls once more. Ebullition in wetlands can be recorded by sensitive sensors, called piezometers, which can detect the presence of pressure pockets within the soil. Hydraulic heads are also used to detect the subtle rising and falling of the soil as a result of pressure build-up and release. Using piezometers and hydraulic heads, a study was conducted in northern United States peatlands to determine the significance of ebullition as a source of methane.
Not only was it determined that ebullition is in fact a significant source of methane emissions in northern United States peatlands, but it was also observed that there was an increase in pressure after significant rainfall, suggesting that rainfall is directly related to methane emissions in wetlands. Controlling factors The magnitude of methane emission from a wetland is usually measured using eddy covariance, gradient, or chamber flux techniques, and depends upon several factors, including water table, comparative ratios of methanogenic bacteria to methanotrophic bacteria, transport mechanisms, temperature, substrate type, plant life, and climate. These factors work together to affect and control methane flux in wetlands. Overall, the main determinant of net flux of methane into the atmosphere is the ratio of methane produced by methanogenic bacteria that makes it to the surface relative to the amount of methane that is oxidized by methanotrophic bacteria before reaching the atmosphere. This ratio is in turn affected by the other controlling factors of methane in the environment. Additionally, pathways of methane emission affect how the methane travels into the atmosphere and thus have an equal effect on methane flux in wetlands. Water table The first controlling factor to consider is the level of the water table. Not only do pool and water table location determine the areas where methane production or oxidation may take place, but they also determine how quickly methane can diffuse into the air. When traveling through water, the methane molecules run into the quickly moving water molecules and thus take a longer time to reach the surface. Travel through soil, however, is much easier and results in easier diffusion into the atmosphere. This theory of movement is supported by observations made in wetlands where significant fluxes of methane occurred after a drop in the water table due to drought. If the water table is at or above the surface, then methane transport begins to take place primarily through ebullition and vascular or pressurized plant-mediated transport, with high levels of emission occurring during the day from plants that use pressurized ventilation. Temperature Temperature is also an important factor to consider, as the environmental temperature—and the temperature of the soil in particular—affects the metabolic rate of production or consumption by bacteria. Additionally, because methane fluxes vary annually with the seasons, there is evidence suggesting that changing temperature, coupled with water table level, works to cause and control the seasonal cycles. Substrate composition The composition of the soil and substrate availability change the nutrients available for methanogenic and methanotrophic bacteria, and thus directly affect the rate of methane production and consumption. For example, wetland soils with high levels of acetate, or of hydrogen and carbon dioxide, are conducive to methane production. Additionally, the type of plant life and the amount of plant decomposition affect the nutrients available to the bacteria as well as the acidity. Plant leachates such as phenolic compounds from Sphagnum can also interact with soil characteristics to influence methane production and consumption. A constant availability of cellulose and a soil pH of about 6.0 have been determined to provide optimum conditions for methane production and consumption; however, substrate quality can be overridden by other factors.
Soil pH and composition must still be compared to the effects of water table and temperature. Net ecosystem production Net ecosystem production (NEP) and climate change are the all-encompassing factors that have been shown to have a direct relationship with methane emissions from wetlands. In wetlands with high water tables, NEP has been shown to increase and decrease with methane emissions, most likely because both NEP and methane emissions fluctuate with substrate availability and soil composition. In wetlands with lower water tables, the movement of oxygen in and out of the soil can increase the oxidation of methane and the inhibition of methanogenesis, nullifying the relationship between methane emission and NEP, because methane production becomes dependent upon factors deep within the soil. A changing climate affects many factors within the ecosystem, including water table, temperature, and plant composition within the wetland—all factors that affect methane emissions. However, climate change can also affect the amount of carbon dioxide in the surrounding atmosphere, which would in turn decrease the addition of methane into the atmosphere, as shown by an 80% decrease in methane flux in areas of doubled carbon dioxide levels. Causes for additional emissions Human development of wetlands Humans often drain wetlands in the name of development, housing, and agriculture. Draining wetlands lowers the water table, increasing consumption of methane by the methanotrophic bacteria in the soil. However, as a result of draining, water-saturated ditches develop which, due to the warm, moist environment, end up emitting a large amount of methane. The actual effect on methane emission therefore ends up depending strongly on several factors. If the drains are not spaced far enough apart, then saturated ditches will form, creating mini wetland environments. Additionally, if the water table is lowered significantly enough, then the wetland can actually be transformed from a source of methane into a sink that consumes methane. Finally, the actual composition of the original wetland changes how the surrounding environment is affected by the draining and human development. == References ==
greenhouse gas emissions in kentucky
This article is intended to give an overview of greenhouse gas emissions in the U.S. state of Kentucky. Greenhouse gas inventory The report "Kentucky Greenhouse Gas Inventory" provides a detailed inventory of greenhouse gas emissions and sinks for Kentucky in 1990. Emissions were estimated using methods from EPA's 1995 guidance document State Workbook: Methodologies for Estimating Greenhouse Gas Emissions. In 1990, Kentucky emitted 35.4 million metric tons of carbon equivalent (MMTCE). In addition, Kentucky estimated emissions of 0.4 MMTCE from biofuels; emissions from biofuels are not included in the total. The principal greenhouse gases emitted were carbon dioxide, comprising 87.9 million metric tons (24.0 MMTCE), and methane, with 1.1 million metric tons (6.4 MMTCE). Other emissions included 0.0016 million metric tons of perfluorocarbons (PFCs) (4.8 MMTCE), and 0.003 million metric tons of nitrous oxide (0.2 MMTCE). The major source of carbon dioxide emissions was fossil fuel combustion (96%), the majority of which is utility coal. Minor emissions came from cement and lime production and forest/grassland conversion. Carbon dioxide sinks (i.e., an increase in forest carbon storage) offset about 26% of the total carbon dioxide emissions. Sources of methane emissions were coal mining (73%), domesticated animals (12%), landfills (10%), manure management (3%), and natural gas/oil extraction (2%). Nitrous oxide emissions were from fertilizer use. Sources of perfluorocarbons were HCFC-22 production (91%) and aluminum production (9%). Compared to the 1990 U.S. emissions of 6.4 MTCE per capita, Kentucky's emissions were 9.6 MTCE per person. Due to the substantial amount of coal-related activity taking place in Kentucky, the state has high emissions per person. Coal-seam fires A great deal of greenhouse gas emissions and toxic pollutants in Kentucky originate in coal-seam fires. These can continue to burn for hundreds of years, because the low supply of oxygen leads to a slow rate of combustion. Coal-seam fires in Kentucky include the Truman Shepherd fire in Floyd and Knott counties (1,400 t/yr CO2 in 2010), the Ruth Mullins fire in Perry County (726 t/yr CO2 in 2010), and the Old Smokey fire in Floyd County. The Truman Shepherd fire was brought down to less than 66 t/yr CO2 by 2013 through mitigation actions. Carbon-dioxide storage Carbon sequestration is the process of injecting carbon dioxide into geological formations in order to store it and prevent it from entering the atmosphere. Pumping carbon dioxide into geological formations has been done in the oil industry for some time for the purpose of extracting oil. For this reason the technology is considered proven, at least as far as the physical pumping is concerned. Unmineable coal seams are one possible formation that could be used for this purpose. As of 2010, there were plans to conduct a feasibility study in conjunction with the Kentucky Geological Survey. The Kentucky Carbon Storage Foundation will drill the test well. The facility is intended to serve a coal gasification plant planned by Peabody Energy and ConocoPhillips. In 2020, Kentucky power plants were emitting 53,725,429 tons of carbon dioxide into the air. References This article incorporates public domain material from Kentucky Greenhouse Gas Emissions and Sinks Inventory: Summary (PDF). U.S. Environmental Protection Agency. External links Hugh T. Spencer, Climate change mitigation strategies for Kentucky (U.S. Environmental Protection Agency), 30 June 1998.
Climate Change - State and Local Governments: Kentucky, U.S. Environmental Protection Agency, archived 27 September 2009. Erin Courtenay, Climate Change Claims Yet Another Victim: Kentucky Bourbon, TreeHugger, 24 January 2006. Climate Change in Kentucky (Next Generation Earth), archived 3 March 2009.
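The MMTCE unit used throughout the inventory above can be unpacked with a short sketch. The conversion below assumes the convention of that era: tonnes of each gas are weighted by its 100-year global warming potential (GWP; 1 for CO2, 21 for methane in the IPCC Second Assessment Report) and rescaled from CO2 mass to carbon mass by the factor 12/44. The GWP value for methane is an assumption, not stated in the article.

```python
# Sketch of the MMTCE (million metric tons of carbon equivalent) conversion
# apparently used in the Kentucky inventory. The methane GWP of 21 is an
# assumed SAR-era value; small mismatches reflect rounding in the source.

def mmtce(million_tonnes_gas: float, gwp: float) -> float:
    """Convert emissions of a gas (Mt) to carbon-equivalent terms (MMTCE)."""
    return million_tonnes_gas * gwp * (12.0 / 44.0)

print(round(mmtce(87.9, 1.0), 1))   # CO2: -> 24.0, matching the stated figure
print(round(mmtce(1.1, 21.0), 1))   # CH4: -> 6.3, close to the stated 6.4
```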
greenhouse gas monitoring
Greenhouse gas monitoring is the direct measurement of greenhouse gas emissions and levels. There are several different methods of measuring carbon dioxide concentrations in the atmosphere, including infrared analysis and manometry. Methane and nitrous oxide are measured by other instruments. Greenhouse gases are measured from space, such as by the Orbiting Carbon Observatory, and by networks of ground stations such as the Integrated Carbon Observation System. Methodology Carbon dioxide monitoring Manometry Manometry is a key measurement technique for atmospheric carbon dioxide: the volume, temperature, and pressure of a particular amount of dry air are first measured. The air sample is dried by passing it through multiple dry ice traps and then collected in a five-liter vessel. The temperature is taken via a thermometer and the pressure is determined by manometry. Then, liquid nitrogen is added, causing the carbon dioxide to condense and become measurable by volume. The ideal gas law is accurate to 0.3% under these pressure conditions. Infrared gas analyzer Infrared analyzers were used at Mauna Loa Observatory and at Scripps Institution of Oceanography between 1958 and 2006. IR analyzers operate by pumping an unknown sample of dry air through a 40 cm long cell. A reference cell contains dry carbon dioxide-free air. A glowing nichrome filament radiates broadband IR radiation, which is split into two beams that pass through the gas cells. Carbon dioxide in the sample absorbs some of the radiation, so more of the radiation passing through the reference cell reaches the detector than of the radiation passing through the sample cell. Data is collected on a strip chart recorder. The concentration of carbon dioxide in the sample is quantified by calibrating with a standard gas of known carbon dioxide content. Titrimetry Titrimetry is another method of measuring atmospheric carbon dioxide; it was first used by a Scandinavian group at 15 different ground stations, who passed 100.0 mL air samples through a solution of barium hydroxide containing cresolphthalein indicator. Methane gas monitoring Differential absorption lidar Range-resolved infrared differential absorption lidar (DIAL) is a means of measuring methane emissions from various sources, including active and closed landfill sites. The DIAL takes vertical scans above methane sources and then spatially separates the scans to accurately measure the methane emissions from individual sources. Measuring methane emissions is a crucial aspect of climate change research, as methane is among the most impactful gaseous hydrocarbon species. Nitrous oxide monitoring Atmospheric Chemistry Experiment‐Fourier Transform Spectrometer (ACE-FTS) Nitrous oxide is one of the most prominent anthropogenic ozone-depleting gases in the atmosphere. It is released into the atmosphere primarily through natural sources such as soil and rock, as well as through anthropogenic processes like farming. Nitrous oxide is also created in the atmosphere as a product of a reaction between nitrogen and electronically excited ozone in the lower thermosphere. The Atmospheric Chemistry Experiment‐Fourier Transform Spectrometer (ACE-FTS) is a tool used for measuring nitrous oxide concentrations in the upper to lower troposphere. This instrument, which is attached to the Canadian satellite SCISAT, has shown that nitrous oxide is present throughout the entire atmosphere during all seasons, primarily due to energetic particle precipitation.
Measurements taken by the instrument show that different reactions create nitrous oxide in the lower thermosphere than in the mid to upper mesosphere. The ACE-FTS is a crucial resource in predicting future ozone depletion in the upper stratosphere by comparing the different ways in which nitrous oxide is released into the atmosphere. Satellite monitoring Orbiting Carbon Observatory (OCO, OCO-2, OCO-3) The Orbiting Carbon Observatory (OCO) was first launched in February 2009 but was lost due to a launch failure. A replacement, the Orbiting Carbon Observatory-2, was launched in 2014 with an estimated lifespan of about two years. The apparatus uses spectrometers to take 24 carbon dioxide concentration measurements of Earth's atmosphere per second. The measurements taken by OCO-2 can be used for global atmospheric models and will allow scientists to locate carbon sources when the data is paired with wind patterns. The Orbiting Carbon Observatory-3 operates from the International Space Station (ISS). Greenhouse Gases Observing Satellite (GOSat) Satellite observations provide accurate readings of carbon dioxide and methane gas concentrations for short-term and long-term purposes in order to detect changes over time. The goals of this satellite, launched in January 2009, are to monitor both carbon dioxide and methane gas in the atmosphere and to identify their sources. GOSat is a project of three main entities: the Japan Aerospace Exploration Agency (JAXA), the Ministry of the Environment (MOE), and the National Institute for Environmental Studies (NIES). Ground stations Integrated Carbon Observation System (ICOS) The Integrated Carbon Observation System was established in October 2015 in Helsinki, Finland as a European Research Infrastructure Consortium (ERIC). The main task of ICOS is to establish an Integrated Carbon Observation System Research Infrastructure (ICOS RI) that facilitates research on greenhouse gas emissions, sinks, and their causes. The ICOS ERIC strives to link its own research with other greenhouse gas emissions research to produce coherent data products and to promote education and innovation. See also Carbon accounting Greenhouse gas inventory Infrared gas analyzer Mauna Loa Observatory Keeling Curve External links Climate Trace Public GHG monitoring expected from mid-2021 == References ==
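As an illustration of the manometric method described above, the ideal gas law turns the measured pressure, volume, and temperature into an amount of gas, and the ratio of the recovered CO2 fraction to the whole sample gives the concentration. All numerical values in this sketch are invented for illustration; only the PV = nRT relation comes from the method itself.

```python
# Minimal sketch of a manometric CO2 determination: the ideal gas law
# (PV = nRT) gives moles of gas before and after the CO2 is condensed out
# with liquid nitrogen. All readings below are hypothetical.

R = 8.314462  # J/(mol*K), universal gas constant

def moles(pressure_pa: float, volume_m3: float, temperature_k: float) -> float:
    """Amount of gas from the ideal gas law, n = PV / (RT)."""
    return pressure_pa * volume_m3 / (R * temperature_k)

# Hypothetical readings for a 5-litre vessel of dried air at room temperature.
n_air = moles(101_325.0, 5.0e-3, 293.15)
# Hypothetical pressure of the recovered CO2 fraction in the same vessel.
n_co2 = moles(42.0, 5.0e-3, 293.15)

print(f"CO2 mole fraction: {n_co2 / n_air * 1e6:.0f} ppm")  # ~415 ppm
```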
representative concentration pathway
A Representative Concentration Pathway (RCP) is a greenhouse gas concentration (not emissions) trajectory adopted by the IPCC. Four pathways were used for climate modeling and research for the IPCC Fifth Assessment Report (AR5) in 2014. The pathways describe different climate change scenarios, all of which are considered possible depending on the amount of greenhouse gases (GHG) emitted in the years to come. The RCPs – originally RCP2.6, RCP4.5, RCP6, and RCP8.5 – are labelled after a possible range of radiative forcing values in the year 2100 (2.6, 4.5, 6, and 8.5 W/m2, respectively). The higher values mean higher greenhouse gas emissions and therefore higher global temperatures and more pronounced effects of climate change. The lower RCP values, on the other hand, are more desirable for humans but require more stringent climate change mitigation efforts to achieve them. Short descriptions of the original pathways and the later additions are as follows: RCP 1.9 is a pathway that limits global warming to below 1.5 °C, the aspirational goal of the Paris Agreement. RCP 2.6 is a "very stringent" pathway. RCP 3.4 represents an intermediate pathway between the "very stringent" RCP 2.6 and the less stringent mitigation efforts associated with RCP 4.5. RCP 4.5 is described by the IPCC as an intermediate scenario. In RCP 6, emissions peak around 2080, then decline. RCP 7 is a baseline outcome rather than a mitigation target. In RCP 8.5, emissions continue to rise throughout the 21st century (Figure 2, p. 223). Since the IPCC's Fifth Assessment Report, the original pathways have been considered together with Shared Socioeconomic Pathways, as have new RCPs such as RCP 1.9, RCP 3.4 and RCP 7. Concentrations The RCPs are consistent with a wide range of possible changes in future anthropogenic (i.e., human) greenhouse gas emissions, and aim to represent their atmospheric concentrations. Although earlier scenarios were characterized in terms of emission inputs, a key change from the 2007 to the 2014 IPCC report is that the RCPs set the carbon cycle aside by focusing on concentrations of greenhouse gases, not greenhouse gas inputs. The IPCC studies the carbon cycle separately, predicting higher ocean uptake of carbon corresponding to higher concentration pathways, but land carbon uptake is much more uncertain due to the combined effect of climate change and land use changes. The four RCPs are consistent with certain socio-economic assumptions, but they are being superseded by the Shared Socioeconomic Pathways, which are anticipated to provide flexible descriptions of possible futures within each RCP. The RCP scenarios superseded the Special Report on Emissions Scenarios projections published in 2000 and were based on similar socio-economic models. Pathways used in modelling RCP 1.9 RCP 1.9 is a pathway that limits global warming to below 1.5 °C, the aspirational goal of the Paris Agreement. RCP 2.6 RCP 2.6 is a "very stringent" pathway. According to the IPCC, RCP 2.6 requires that carbon dioxide (CO2) emissions start declining by 2020 and go to zero by 2100. It also requires that methane (CH4) emissions go to approximately half the CH4 levels of 2020, and that sulphur dioxide (SO2) emissions decline to approximately 10% of those of 1980–1990. Like all the other RCPs, RCP 2.6 requires negative CO2 emissions (such as CO2 absorption by trees). For RCP 2.6, those negative emissions would be on average 2 gigatons of CO2 per year (GtCO2/yr). RCP 2.6 is likely to keep global temperature rise below 2 °C by 2100.
RCP 3.4 RCP 3.4 represents an intermediate pathway between the "very stringent" RCP 2.6 and the less stringent mitigation efforts associated with RCP 4.5. As well as providing another option, a variant of RCP 3.4 includes considerable removal of greenhouse gases from the atmosphere. A 2021 paper suggests that the most plausible projections of cumulative CO2 emissions (having a 0.1% or 0.3% tolerance with historical accuracy) tend to point to RCP 3.4 (3.4 W/m2; 2.0–2.4 °C of warming by 2100, according to the study) as the most plausible pathway. RCP 4.5 RCP 4.5 is described by the IPCC as an intermediate scenario. Emissions in RCP 4.5 peak around 2040, then decline (Figure 2, p. 223). According to resource specialists, IPCC emission scenarios are biased towards exaggerated availability of fossil fuel reserves; RCP 4.5 is the most probable baseline scenario (no climate policies) taking into account the exhaustible character of non-renewable fuels. According to the IPCC, RCP 4.5 requires that carbon dioxide (CO2) emissions start declining by approximately 2045 to reach roughly half of the levels of 2050 by 2100. It also requires that methane (CH4) emissions stop increasing by 2050 and decline somewhat to about 75% of the CH4 levels of 2040, and that sulphur dioxide (SO2) emissions decline to approximately 20% of those of 1980–1990. Like all the other RCPs, RCP 4.5 requires negative CO2 emissions (such as CO2 absorption by trees). For RCP 4.5, those negative emissions would be 2 gigatons of CO2 per year (GtCO2/yr). RCP 4.5 is more likely than not to result in a global temperature rise of between 2 °C and 3 °C by 2100, with a mean sea level rise 35% higher than that of RCP 2.6. Many plant and animal species will be unable to adapt to the effects of RCP 4.5 and higher RCPs. RCP 6 In RCP 6, emissions peak around 2080, then decline. The RCP 6.0 scenario uses a high greenhouse gas emission rate and is a stabilisation scenario in which total radiative forcing is stabilised after 2100 by the employment of a range of technologies and strategies for reducing greenhouse gas emissions; 6.0 W/m2 refers to the radiative forcing reached by 2100. Projections for temperature according to RCP 6.0 include continuous global warming through 2100, with CO2 levels rising to 670 ppm by 2100 and the global temperature rising by about 3–4 °C by 2100. RCP 7 RCP 7 is a baseline outcome rather than a mitigation target. RCP 8.5 In RCP 8.5, emissions continue to rise throughout the 21st century (Figure 2, p. 223). Since AR5 this has been thought to be very unlikely, but still possible, as feedbacks are not well understood. RCP 8.5, generally taken as the basis for worst-case climate change scenarios, was based on what proved to be an overestimation of projected coal output. It is still used for predicting mid-century (and earlier) emissions based on current and stated policies. Projections based on the RCPs 21st century The IPCC Fifth Assessment Report (IPCC AR5 WG1) gives mid- and late-21st century (2046–2065 and 2081–2100 averages, respectively) projections of global warming and global mean sea level rise, summarized below. The projections are relative to temperatures and sea levels in the late-20th to early-21st centuries (1986–2005 average). Temperature projections can be converted to a reference period of 1850–1900 or 1980–99 by adding 0.61 or 0.11 °C, respectively. Across all RCPs, global mean temperature is projected to rise by 0.3 to 4.8 °C by the late 21st century.
Across all RCPs, global mean sea level is projected to rise by 0.26 to 0.82 m by the late 21st century. 23rd century AR5 also projects changes in climate beyond the 21st century. The extended RCP 2.6 pathway assumes sustained net negative anthropogenic GHG emissions after the year 2070. "Negative emissions" means that, in total, humans absorb more GHGs from the atmosphere than they release. The extended RCP 8.5 pathway assumes continued anthropogenic GHG emissions after 2100. In the extended RCP 2.6 pathway, atmospheric CO2 concentrations reach around 360 ppmv by 2300, while in the extended RCP 8.5 pathway, CO2 concentrations reach around 2000 ppmv in 2250, which is nearly seven times the pre-industrial level. For the extended RCP 2.6 scenario, global warming of 0.0 to 1.2 °C is projected for the late 23rd century (2281–2300 average), relative to 1986–2005. For the extended RCP 8.5, global warming of 3.0 to 12.6 °C is projected over the same time period. See also Climate change scenario Coupled Model Intercomparison Project IPCC Sixth Assessment Report (2021) Special Report on Emissions Scenarios (2000) Shared Socioeconomic Pathways References External links Special Issue: The representative concentration pathways: an overview, Climatic Change, Volume 109, Issue 1–2, November 2011. Most papers in this issue are freely accessible. The Guardian: A guide to the IPCC's new RCP emissions pathways
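The RCP labels are year-2100 radiative forcing values, and a standard back-of-envelope relation (not given in the article) links forcing to eventual equilibrium warming: ΔT = S × ΔF / F2x, where S is the equilibrium climate sensitivity and F2x ≈ 3.7 W/m2 is the forcing from a doubling of CO2. Both constants below are assumed round values, and equilibrium warming is not directly comparable to the transient end-of-century projections quoted above, since the oceans respond slowly.

```python
# Back-of-envelope conversion of an RCP's year-2100 forcing to eventual
# (equilibrium) warming via dT = S * dF / F2x. S = 3.0 C and F2x = 3.7 W/m2
# are assumed values, not figures from the article; equilibrium warming
# exceeds the transient 2100 projections because of ocean thermal lag.

S = 3.0    # equilibrium climate sensitivity, deg C per CO2 doubling (assumed)
F2X = 3.7  # radiative forcing of a CO2 doubling, W/m2 (assumed)

for name, forcing in [("RCP 2.6", 2.6), ("RCP 4.5", 4.5),
                      ("RCP 6", 6.0), ("RCP 8.5", 8.5)]:
    print(f"{name}: ~{S * forcing / F2X:.1f} C equilibrium warming")
```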
emissions trading
Emissions trading is a market-based approach to controlling pollution by providing economic incentives for reducing the emissions of pollutants. The concept is also known as cap and trade (CAT) or an emissions trading scheme (ETS). One prominent example is carbon emission trading for CO2 and other greenhouse gases, which is a tool for climate change mitigation. Other schemes target sulfur dioxide and other pollutants. In an emissions trading scheme, a central authority or governmental body allocates or sells a limited number (a "cap") of permits that allow a discharge of a specific quantity of a specific pollutant over a set time period. Polluters are required to hold permits in an amount equal to their emissions. Polluters that want to increase their emissions must buy permits from others willing to sell them. Emissions trading is a type of flexible environmental regulation that allows organizations and markets to decide how best to meet policy targets. This is in contrast to command-and-control environmental regulations such as best available technology (BAT) standards and government subsidies. Introduction Pollution is a prime example of a market externality. An externality is an effect of some activity on an entity (such as a person) that is not party to a market transaction related to that activity. Emissions trading is a market-based approach to address pollution. The overall goal of an emissions trading plan is to minimize the cost of meeting a set emissions target. In an emissions trading system, the government sets an overall limit on emissions, and defines permits (also called allowances), or limited authorizations to emit, up to the level of the overall limit. The government may sell the permits, but in many existing schemes, it gives permits to participants (regulated polluters) equal to each participant's baseline emissions. The baseline is determined by reference to the participant's historical emissions. To demonstrate compliance, a participant must hold permits at least equal to the quantity of pollution it actually emitted during the time period. If every participant complies, the total pollution emitted will be at most equal to the sum of individual limits. Because permits can be bought and sold, a participant can choose either to use its permits exactly (by reducing its own emissions); or to emit less than its permits allow, and perhaps sell the excess permits; or to emit more than its permits allow, and buy permits from other participants. In effect, the buyer pays a charge for polluting, while the seller gains a reward for having reduced emissions. Emissions trading incorporates these economic costs into the costs of production, which incentivizes corporations to consider investment returns and capital expenditure decisions with a model that includes the price of carbon and greenhouse gases (GHG). In many schemes, organizations which do not pollute (and therefore have no obligations) may also trade permits and financial derivatives of permits. In some schemes, participants can bank allowances to use in future periods. In some schemes, a proportion of all traded permits must be retired periodically, causing a net reduction in emissions over time. Thus, environmental groups may buy and retire permits, driving up the price of the remaining permits according to the law of demand. In most schemes, permit owners can donate permits to a nonprofit entity and receive a tax deduction.
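To make the buy/sell logic above concrete, here is a toy sketch with invented numbers: two firms share a cap, the low-cost abater over-complies and sells its surplus permits, and the same total abatement is achieved at a lower overall cost than if each firm had to meet its own limit in isolation.

```python
# Toy cap-and-trade example (all numbers invented): two firms each emit 100
# units; the cap allocates 80 permits to each. Firm A abates at $10/unit,
# firm B at $40/unit. Any permit price between those costs makes trade
# worthwhile for both sides.

cap_per_firm = 80
emissions = 100
abatement_cost = {"A": 10, "B": 40}
permit_price = 25  # assumed market-clearing price between $10 and $40

# Without trading, each firm abates its own 20-unit shortfall.
no_trade_cost = 20 * abatement_cost["A"] + 20 * abatement_cost["B"]  # $1,000

# With trading, A abates all 40 units and sells 20 spare permits to B.
cost_a = 40 * abatement_cost["A"] - 20 * permit_price  # $400 - $500 = -$100
cost_b = 20 * permit_price                             # $500
print(no_trade_cost, cost_a + cost_b)  # 1000 vs 400: same total abatement
```

The 40 units of abatement are identical in both cases; trading simply shifts them to the firm that can deliver them most cheaply.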
Usually, the government lowers the overall limit over time, with an aim towards a national emissions reduction target. There are active trading programs in several air pollutants. An earlier application was the US national market to reduce acid rain. The United States now has several regional markets in nitrogen oxides. History The efficiency of what later was to be called the "cap-and-trade" approach to air pollution abatement was first demonstrated in a series of micro-economic computer simulation studies between 1967 and 1970 for the National Air Pollution Control Administration (predecessor to the United States Environmental Protection Agency's Office of Air and Radiation) by Ellison Burton and William Sanjour. These studies used mathematical models of several cities and their emission sources in order to compare the cost and effectiveness of various control strategies. Each abatement strategy was compared with the "least-cost solution" produced by a computer optimization program to identify the least-costly combination of source reductions in order to achieve a given abatement goal. In each case it was found that the least-cost solution was dramatically less costly than the same amount of pollution reduction produced by any conventional abatement strategy. Burton and later Sanjour, along with Edward H. Pechan, continued improving and advancing these computer models at the newly created U.S. Environmental Protection Agency. The agency introduced the concept of computer modeling with least-cost abatement strategies (i.e., emissions trading) in its 1972 annual report to Congress on the cost of clean air. This led to the concept of "cap and trade" as a means of achieving the "least-cost solution" for a given level of abatement. The development of emissions trading over the course of its history can be divided into four phases: Gestation: Theoretical articulation of the instrument (by Coase, Crocker, Dales, Montgomery, etc.) and, independent of the former, tinkering with "flexible regulation" at the US Environmental Protection Agency. Proof of Principle: First developments towards trading of emission certificates based on the "offset mechanism" taken up in the Clean Air Act in 1977; under this mechanism, a company could be allowed a greater amount of emissions if it paid another company to reduce the same pollutant. Prototype: Launching of a first "cap-and-trade" system as part of the US Acid Rain Program in Title IV of the 1990 Clean Air Act, officially announced as a paradigm shift in environmental policy, as prepared by "Project 88", a network-building effort to bring together environmental and industrial interests in the US. Regime formation: Branching out from US clean air policy to global climate policy, and from there to the European Union, along with the expectation of an emerging global carbon market and the formation of the "carbon industry". In the United States, the acid rain-related emission trading system was principally conceived by C. Boyden Gray, a G.H.W. Bush administration attorney. Gray worked with the Environmental Defense Fund (EDF), which worked with the EPA to write the bill that became law as part of the Clean Air Act of 1990. The new emissions cap on NOx and SO2 gases took effect in 1995, and according to Smithsonian magazine, those acid rain emissions dropped 3 million tons that year. Economics It is possible for a country to reduce emissions using a command-and-control approach, such as regulation and direct or indirect taxes.
The cost of that approach differs between countries because the Marginal Abatement Cost Curve (MAC)—the cost of eliminating an additional unit of pollution—differs by country. Coase model Coase (1960) argued that social costs could be accounted for by negotiating property rights according to a particular objective. Coase's model assumes perfectly operating markets and equal bargaining power among those arguing for property rights. In Coase's model, efficiency, i.e., achieving a given reduction in emissions at lowest cost, is promoted by the market system. This can also be looked at from the perspective of having the greatest flexibility to reduce emissions. Flexibility is desirable because the marginal costs, that is to say, the incremental costs of reducing emissions, vary among countries. Emissions trading allows emission reductions to be made first in locations where the marginal costs of abatement are lowest (Bashmakov et al., 2001). Over time, efficiency can also be promoted by allowing "banking" of permits (Goldemberg et al., 1996, p. 30). This allows polluters to reduce emissions at a time when it is most efficient to do so. Equity One of the advantages of Coase's model is that it suggests that fairness (equity) can be addressed in the distribution of property rights, and that regardless of how these property rights are assigned, the market will produce the most efficient outcome. In reality, according to the commonly held view, markets are not perfect, and it is therefore possible that a trade-off will occur between equity and efficiency (Halsnæs et al., 2007). Trading In an emissions trading system, permits may be traded by emitters who are liable to hold a sufficient number of permits in the system. Some analysts argue that allowing others to participate in trading, e.g., private brokerage firms, can allow for better management of risk in the system, e.g., against variations in permit prices (Bashmakov et al., 2001). It may also improve the efficiency of the system. According to Bashmakov et al. (2001), regulation of these other entities may be necessary, as is done in other financial markets, e.g., to prevent abuses of the system, such as insider trading. Incentives and allocation Emissions trading gives polluters an incentive to reduce their emissions. However, there are possible perverse incentives that can exist in emissions trading. Allocating permits on the basis of past emissions ("grandfathering") can result in firms having an incentive to maintain emissions. For example, a firm that reduced its emissions would receive fewer permits in the future (IMF, 2008, pp. 25–26). There are costs that emitters do face, e.g., the costs of the fuel being used, but there are other costs that are not necessarily included in the price of a good or service. These other costs are called external costs (Halsnæs et al., 2007). Grandfathering can also be criticized on ethical grounds, since the polluter is in effect being paid to reduce emissions (Goldemberg et al., 1996, p. 38). On the other hand, a permit system where permits are auctioned rather than given away provides the government with revenues. These revenues might be used to improve the efficiency of overall climate policy, e.g., by funding energy efficiency programs (ACEEE 2019) or reductions in distortionary taxes (Fisher et al., 1996, p. 417). In Coase's model of social costs, either choice (grandfathering or auctioning) leads to efficiency.
In reality, grandfathering subsidizes polluters, meaning that polluting industries may be kept in business longer than would otherwise occur. Grandfathering may also reduce the rate of technological improvement towards less polluting technologies (Fisher et al., 1996, p. 417). William Nordhaus argues that free allocations cost the economy because they cause the underutilisation of an efficient form of taxation: normal income, goods, or service taxes distort efficient investment and consumption, so by using pollution taxes to generate revenue an emissions scheme can increase the efficiency of the economy. Form of allocation The economist Ross Garnaut states that permits allocated to existing emitters by 'grandfathering' are not 'free'. As the permits are scarce, they have value, and the benefit of that value is acquired in full by the emitter. The cost is imposed elsewhere in the economy, typically on consumers, who cannot pass on the costs. Market and least-cost Some economists have urged the use of market-based instruments such as emissions trading to address environmental problems instead of prescriptive "command-and-control" regulation. Command-and-control regulation is criticized for being insensitive to geographical and technological differences, and therefore inefficient; however, this is not always so, as shown by the WWII rationing program in the U.S., in which local and regional boards made adjustments for these differences. After an emissions limit has been set by a government political process, individual companies are free to choose how or whether to reduce their emissions. Failure to report emissions and surrender emission permits is often punishable by a further government regulatory mechanism, such as a fine that increases costs of production. Firms will choose the least-cost way to comply with the pollution regulation, which will lead to reductions where the least expensive solutions exist, while allowing emissions that are more expensive to reduce. Under an emissions trading system, each regulated polluter has the flexibility to use the most cost-effective combination of buying or selling emission permits, reducing its emissions by installing cleaner technology, or reducing its emissions by reducing production. The most cost-effective strategy depends on the polluter's marginal abatement cost and the market price of permits. In theory, a polluter's decisions should lead to an economically efficient allocation of reductions among polluters, and lower compliance costs for individual firms and for the economy overall, compared to command-and-control mechanisms. Measuring, reporting, verification and enforcement In some industrial processes, emissions can be physically measured by inserting sensors and flowmeters in chimneys and stacks, but many types of activity rely on theoretical calculations instead of measurement. Depending on local legislation, measurements may require additional checks and verification by government or third-party auditors, prior to or after submission to the local regulator. Enforcement methods include fines and sanctions for polluters that have exceeded their allowances. Concerns include the cost of measurement, reporting and verification (MRV) and enforcement, and the risk that facilities may lie about actual emissions. Pollution markets An emission license directly confers a right to emit pollutants up to a certain rate. In contrast, a pollution license for a given location confers the right to emit pollutants at a rate which will cause no more than a specified increase in the pollution level at that location.
For concreteness, consider the following model. There are n agents, each of which emits e_i pollutants. There are m locations, each of which suffers pollution q_i. The pollution is a linear combination of the emissions: the relation between e and q is given by a diffusion matrix H, such that q = H · e. As an example, consider three countries along a river (as in the fair river sharing setting). Pollution in the upstream country is determined only by the emission of the upstream country: q_1 = e_1. Pollution in the middle country is determined by its own emission and by the emission of country 1: q_2 = e_1 + e_2. Pollution in the downstream country is the sum of all emissions: q_3 = e_1 + e_2 + e_3. So the matrix H in this case is a triangular matrix of ones. Each pollution-license for location i permits its holder to emit pollutants that will cause at most this level of pollution at location i. Therefore, a polluter that affects water quality at a number of points has to hold a portfolio of licenses covering all relevant monitoring-points. In the above example, if country 2 wants to emit a unit of pollutant, it should purchase two permits: one for location 2 and one for location 3 (see the numerical sketch below). Montgomery shows that, while both markets lead to efficient license allocation, the market in pollution-licenses is more widely applicable than the market in emission-licenses. International emissions trading The nature of the pollutant plays a very important role when policy-makers decide which framework should be used to control pollution. CO2 acts globally, thus its impact on the environment is generally similar wherever in the globe it is released. So the location of the originator of the emissions does not matter from an environmental standpoint. The policy framework should be different for regional pollutants (e.g. SO2 and NOx, and also mercury) because the impact of these pollutants may differ by location. The same amount of a regional pollutant can exert a very high impact in some locations and a low impact in other locations, so it matters where the pollutant is released. This is known as the Hot Spot problem. A Lagrange framework is commonly used to determine the least cost of achieving an objective, in this case the total reduction in emissions required in a year. In some cases, it is possible to use the Lagrange optimization framework to determine the required reductions for each country (based on their MAC) so that the total cost of reduction is minimized. In such a scenario, the Lagrange multiplier represents the market allowance price (P) of a pollutant, such as the current market price of emission permits in Europe and the USA. Countries face the permit market price that exists in the market that day, so they are able to make individual decisions that would minimize their costs while at the same time achieving regulatory compliance. This is also another version of the Equi-Marginal Principle, commonly used in economics to choose the most economically efficient decision.
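Returning to the pollution-market model above, here is the promised minimal numerical sketch of the river example. The emission values are illustrative; the matrix is exactly the lower-triangular matrix of ones described in the text.

```python
# Sketch of the river example: the diffusion matrix H maps the emission
# vector e to the pollution vector q via q = H . e. For three countries
# along a river, H is a lower-triangular matrix of ones.
import numpy as np

H = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]])

e = np.array([2.0, 1.0, 3.0])  # illustrative emissions for countries 1..3
q = H @ e
print(q)  # [2. 3. 6.]: pollution accumulates moving downstream

# A pollution licence is needed at every location a unit of emission affects:
# column i of H lists the monitoring points hit by country i's emissions.
affected = [(np.flatnonzero(H[:, i]) + 1).tolist() for i in range(3)]
print(affected)  # [[1, 2, 3], [2, 3], [3]] -> country 2 needs licences at 2 and 3
```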
Prices versus quantities, and the safety valve There has been longstanding debate on the relative merits of price versus quantity instruments to achieve emission reductions. An emission cap and permit trading system is a quantity instrument because it fixes the overall emission level (quantity) and allows the price to vary. Uncertainty in future supply and demand conditions (market volatility), coupled with a fixed number of pollution permits, creates uncertainty in the future price of pollution permits, and the industry must accordingly bear the cost of adapting to these volatile market conditions. The burden of a volatile market thus lies with the industry rather than the controlling agency, which is generally more efficient. However, under volatile market conditions, the ability of the controlling agency to alter the caps will translate into an ability to pick "winners and losers", and thus presents an opportunity for corruption. In contrast, an emission tax is a price instrument because it fixes the price while the emission level is allowed to vary according to economic activity. A major drawback of an emission tax is that the environmental outcome (e.g. a limit on the amount of emissions) is not guaranteed. On one hand, a tax will remove capital from the industry, suppressing possibly useful economic activity; conversely, the polluter will not need to hedge as much against future uncertainty, since the amount of tax will track with profits. The burden of a volatile market will be borne by the controlling (taxing) agency rather than the industry itself, which is generally less efficient. An advantage is that, given a uniform tax rate and a volatile market, the taxing entity will not be in a position to pick "winners and losers", and the opportunity for corruption will be less. Assuming no corruption and assuming that the controlling agency and the industry are equally efficient at adapting to volatile market conditions, the best choice depends on the sensitivity of the costs of emission reduction, compared to the sensitivity of the benefits (i.e., climate damage avoided by a reduction) when the level of emission control is varied. A third option, known as a safety valve, is a hybrid of the price and quantity instruments. The system is essentially an emission cap and permit trading system, but the maximum (or minimum) permit price is capped. Emitters have the choice of either obtaining permits in the marketplace or buying them from the government at a specified trigger price (which could be adjusted over time). The system is sometimes recommended as a way of overcoming the fundamental disadvantages of both systems by giving governments the flexibility to adjust the system as new information comes to light. It can be shown that by setting the trigger price high enough, or the number of permits low enough, the safety valve can be used to mimic either a pure quantity or pure price mechanism (see the sketch below). Comparison with other methods of emission reduction Cap and trade is the textbook example of an emissions trading program. Other market-based approaches include baseline-and-credit programs and pollution taxes. They all put a price on pollution (for example, see carbon price), and so provide an economic incentive to reduce pollution beginning with the lowest-cost opportunities. By contrast, in a command-and-control approach, a central authority designates the pollution levels each facility is allowed to emit.
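Returning to the safety valve described above, the way the trigger price caps the market is easy to see in a toy sketch. All numbers are invented, and a simple linear marginal abatement cost is assumed.

```python
# Sketch of the safety-valve hybrid (invented numbers). Marginal abatement
# cost is assumed linear: mac(a) = slope * a. Without a valve, the permit
# price clears at the cost of the last unit of required abatement; the
# valve caps that price by letting firms buy extra permits from the
# government at the trigger price instead of abating further.

slope = 2.0          # $ per unit abated, per unit of abatement (assumed)
baseline = 100.0     # uncontrolled emissions
cap = 40.0           # permits issued
trigger = 90.0       # government sells additional permits at this price

required_abatement = baseline - cap          # 60 units
market_price = slope * required_abatement    # $120 without a safety valve

if market_price > trigger:
    abatement = trigger / slope              # firms abate only up to the trigger
    extra_permits = required_abatement - abatement
    print(f"Price capped at ${trigger:.0f}; {extra_permits:.0f} extra permits sold")
else:
    print(f"Cap binds at ${market_price:.0f}; emissions held to {cap:.0f}")
```

With these numbers the valve binds: firms abate 45 units instead of 60, the government sells 15 extra permits at $90, and emissions exceed the original cap, which is exactly the price-certainty-for-quantity-certainty trade described above.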
Cap and trade essentially functions as a tax where the tax rate is variable based on the relative cost of abatement per unit, and the tax base is variable based on the amount of abatement needed. Baseline and credit In a baseline and credit program, polluters can create permits, called credits or offsets, by reducing their emissions below a baseline level, which is often the historical emissions level from a designated past year. Such credits can be bought by polluters that have a regulatory limit. Pollution tax An emissions fee, or environmental tax, is a surcharge on the pollution created while producing goods and services. For example, a carbon tax is a tax on the carbon content of fossil fuels that aims to discourage their use and thereby reduce carbon dioxide emissions. The two approaches are overlapping sets of policy designs. Both can have a range of scopes, points of regulation, and price schedules. They can be fair or unfair, depending on how the revenue is used. Both have the effect of increasing the price of goods (such as fossil fuels) to consumers. A comprehensive, upstream, auctioned cap-and-trade system is very similar to a comprehensive, upstream carbon tax. Yet, many commentators sharply contrast the two approaches. The main difference is what is defined and what is derived. A tax is a price control, while a cap-and-trade system is a quantity control instrument. That is, a tax is a unit price for pollution that is set by authorities, and the market determines the quantity emitted; in cap and trade, authorities determine the amount of pollution, and the market determines the price. This difference affects a number of criteria. Responsiveness to inflation: Cap-and-trade has the advantage that it adjusts to inflation (changes to overall prices) automatically, while emissions fees must be changed by regulators. Responsiveness to cost changes: It is not clear which approach is better. It is possible to combine the two into a safety valve price: a price set by regulators, at which polluters can buy additional permits beyond the cap. Responsiveness to recessions: This point is closely related to responsiveness to cost changes, because recessions cause a drop in demand. Under cap and trade, the emissions cost automatically decreases, so a cap-and-trade scheme adds another automatic stabilizer to the economy—in effect, an automatic fiscal stimulus. However, a lower pollution price also results in reduced efforts to reduce pollution. If the government is able to stimulate the economy regardless of the cap-and-trade scheme, an excessively low price causes a missed opportunity to cut emissions faster than planned. Instead, it might be better to have a price floor (a tax). This is especially true when cutting pollution is urgent, as with greenhouse gas emissions. A price floor also provides certainty and stability for investment in emissions reductions: recent experience from the UK shows that nuclear power operators are reluctant to invest on "un-subsidised" terms unless there is a guaranteed price floor for carbon (which the EU emissions trading scheme does not presently provide). Responsiveness to uncertainty: As with cost changes, in a world of uncertainty, it is not clear whether emissions fees or cap-and-trade systems are more efficient—it depends on how fast the marginal social benefits of reducing pollution fall with the amount of cleanup (e.g., whether the marginal social benefit schedule is inelastic or elastic).
Other: The magnitude of the tax will depend on how sensitive the supply of emissions is to the price. The permit price of cap-and-trade will depend on the pollutant market. A tax generates government revenue, but fully auctioned emissions permits can do the same. A similar upstream cap-and-trade system could be implemented. An upstream carbon tax might be the simplest to administer, whereas setting up a comprehensive cap-and-trade arrangement has high institutional requirements. Command-and-control regulation Command and control is a system of regulation that prescribes emission limits and compliance methods for each facility or source. It is the traditional approach to reducing air pollution. Command-and-control regulations are more rigid than incentive-based approaches such as pollution fees and cap and trade. An example is a performance standard, which sets a fixed emissions goal for each polluter; the burden of reducing pollution therefore cannot be shifted to the firms that can achieve it more cheaply. As a result, performance standards are likely to be more costly overall, and the additional costs would be passed to end consumers. Trading systems Apart from the dynamic development in carbon emission trading, other pollutants have also been targeted. United States Sulfur dioxide An early example of an emission trading system was the sulfur dioxide (SO2) trading system under the framework of the Acid Rain Program of the 1990 Clean Air Act in the U.S. Under the program, which is essentially a cap-and-trade emissions trading system, SO2 emissions were reduced by 50% from 1980 levels by 2007. Some experts argue that the cap-and-trade system of SO2 emissions reduction has reduced the cost of controlling acid rain by as much as 80% versus source-by-source reduction. The SO2 program was challenged in 2004, which set in motion a series of events that led to the 2011 Cross-State Air Pollution Rule (CSAPR). Under the CSAPR, the national SO2 trading program was replaced by four separate trading groups for SO2 and NOx. SO2 emissions from Acid Rain Program sources fell from 17.3 million tons in 1980 to about 7.6 million tons in 2008, a decrease of 56 percent. A 2014 EPA analysis estimated that implementation of the Acid Rain Program avoided between 20,000 and 50,000 incidences of premature mortality annually due to reductions of ambient PM2.5 concentrations, and between 430 and 2,000 incidences annually due to reductions of ground-level ozone. Nitrogen oxides In 2003, the Environmental Protection Agency (EPA) began to administer the NOx Budget Trading Program (NBP) under the NOx State Implementation Plan (also known as the "NOx SIP Call"). The NOx Budget Trading Program was a market-based cap and trade program created to reduce emissions of nitrogen oxides (NOx) from power plants and other large combustion sources in the eastern United States. NOx is a prime ingredient in the formation of ground-level ozone (smog), a pervasive air pollution problem in many areas of the eastern United States. The NBP was designed to reduce NOx emissions during the warm summer months, referred to as the ozone season, when ground-level ozone concentrations are highest. In March 2008, EPA again strengthened the 8-hour ozone standard to 0.075 parts per million (ppm) from its previous 0.08 ppm. Ozone season NOx emissions decreased by 43 percent between 2003 and 2008, even while energy demand remained essentially flat during the same period.
CAIR was projected to result in $85 billion to $100 billion in health benefits and nearly $2 billion in visibility benefits per year by 2015 and to substantially reduce premature mortality in the eastern United States. NOx reductions due to the NOx Budget Trading Program have led to improvements in ozone and PM2.5, saving an estimated 580 to 1,800 lives in 2008. A 2017 study in the American Economic Review found that the NOx Budget Trading Program decreased NOx emissions and ambient ozone concentrations. The program reduced expenditures on medicine by about 1.5% ($800 million annually) and reduced the mortality rate by up to 0.5% (2,200 fewer premature deaths, mainly among individuals 75 and older). Volatile organic compounds In the United States, the Environmental Protection Agency (EPA) classifies volatile organic compounds (VOCs) as gases emitted from certain solids and liquids that may have adverse health effects. VOCs comprise a variety of chemicals emitted from many different products, such as gasoline, perfumes, hair spray, fabric cleaners, PVC, and refrigerants, which can contain chemicals such as benzene, acetone, methylene chloride, freons, and formaldehyde. VOCs are also monitored by the United States Geological Survey (USGS) for their presence in the groundwater supply. The USGS concluded that many of the nation's aquifers are at risk of low-level VOC contamination. The common symptoms of short-term exposure to VOCs include headaches, nausea, and eye irritation; extended exposure is associated with cancer and damage to the central nervous system. China In an effort to reverse the adverse consequences of air pollution, in 2006 China started to consider a national pollution permit trading system in order to use market-based mechanisms to incentivize companies to cut pollution. This was based on a previous pilot project called the Industrial SO2 emission trading pilot scheme, launched in 2002. Four provinces, three municipalities and one state-owned enterprise were involved in this pilot project (also known as the 4+3+1 project): Shandong, Shanxi, Jiangsu, Henan, Shanghai, Tianjin, Liuzhou and China Huaneng Group, a state-owned company in the power industry. In 2014, when the Chinese government started considering a national-level pollution permit trading system again, there were more than 20 local pollution permit trading platforms. The Yangtze River Delta region as a whole has also run test trading, but the scale was limited. In the same year, the Chinese government proposed establishing a carbon market focused on CO2 reduction later in the decade, a separate system from the pollution permit trading. Following these regional efforts, China established its national Emissions Trading System in 2017. A 2021 study in PNAS found that China's emissions trading system effectively reduced firm emissions despite low carbon prices and infrequent trading. The system reduced total emissions by 16.7% and emission intensity by 9.7%. Linked trading systems Distinct cap-and-trade systems can be linked together through the mutual or unilateral recognition of emissions allowances for compliance. Linking systems creates a larger carbon market, which can reduce overall compliance costs, increase market liquidity and generate a more stable carbon market. Linking systems can also be politically symbolic, as it shows willingness to undertake a common effort to reduce GHG emissions.
Some scholars have argued that linking may provide a starting point for developing a new, bottom-up international climate policy architecture, whereby multiple unique systems successively link with one another. In 2014, the U.S. state of California (which would be the world's fifth largest economy if it were a nation, between Germany and the United Kingdom in size) and the Canadian province of Québec successfully linked their systems. In 2015, the provinces of Ontario and Manitoba agreed to join the linked system between Quebec and California. On 22 September 2017, the premiers of Quebec and Ontario, and the Governor of California, signed the formal agreement establishing the linkage. Renewable energy certificates Renewable Energy Certificates (occasionally referred to as "green tags") are a largely unrelated form of market-based instrument used to achieve renewable energy targets, which may be environmentally motivated (like emissions reduction targets), but may also be motivated by other aims, such as energy security or industrial policy. Criticism Distributional effects The US Congressional Budget Office (CBO, 2009) examined the potential effects of the American Clean Energy and Security Act on US households. This act relies heavily on the free allocation of permits. The Bill was found to protect low-income consumers, but it was recommended that the Bill be made more efficient by reducing welfare provisions for corporations and that more resources be made available for consumer relief. A cap-and-trade initiative in the U.S. Northeast caused concerns that it would be regressive and that poorer households would absorb most of the new tax. See also References External links Greenhouse Gas Emissions Trading and Project-based Mechanisms – Organisation for Economic Co-operation and Development US EPA's Acid Rain Program
low-carbon power
Low-carbon power is electricity produced with substantially lower greenhouse gas emissions over the entire lifecycle than power generation using fossil fuels. The energy transition to low-carbon power is one of the most important actions required to limit climate change. Power sector emissions may have peaked in 2018. During the first six months of 2020, scientists observed an 8.8% decrease in global CO2 emissions relative to 2019 due to COVID-19 lockdown measures. The two main sources of the decrease in emissions were ground transportation (40%) and the power sector (22%). This event was the largest absolute decrease in CO2 emissions in history, but it emphasizes that low-carbon power "must be based on structural and transformational changes in energy-production systems". Low-carbon power generation sources include wind power, solar power, nuclear power and most hydropower. The term largely excludes conventional fossil fuel plant sources, and is only used to describe a particular subset of operating fossil fuel power systems, specifically those that are successfully coupled with a flue gas carbon capture and storage (CCS) system. Globally, almost 40% of electricity generation came from low-carbon sources in 2020: about 10% from nuclear power, almost 10% from wind and solar, and around 20% from hydropower and other renewables. History During the late 20th and early 21st century, significant findings regarding global warming highlighted the need to curb carbon emissions. From this, the idea for low-carbon power was born. The Intergovernmental Panel on Climate Change (IPCC), established by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) in 1988, set the scientific precedent for the introduction of low-carbon power. The IPCC has continued to provide scientific, technical and socio-economic advice to the world community through its periodic assessment reports and special reports. Internationally, the most prominent early step in the direction of low-carbon power was the signing of the Kyoto Protocol, which came into force on 16 February 2005, under which most industrialized countries committed to reduce their carbon emissions. The historical event set the political precedent for the introduction of low-carbon power technology. On a social level, perhaps the biggest factor contributing to the general public's awareness of climate change and the need for new technologies, including low-carbon power, came from the documentary An Inconvenient Truth, which clarified and highlighted the problem of global warming. Power sources by greenhouse gas emissions Differentiating attributes of low-carbon power sources There are many options for lowering current levels of carbon emissions. Some options, such as wind power and solar power, produce low quantities of total life cycle carbon emissions, using entirely renewable sources. Other options, such as nuclear power, produce an amount of carbon dioxide emissions comparable to renewable technologies in total life cycle emissions, but consume non-renewable, though sustainable, materials (uranium).
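As a rough illustration of how the lifecycle emissions just mentioned combine at grid level, the following Python sketch computes a generation-weighted carbon intensity. The emission factors are the commonly cited IPCC AR5 median lifecycle values in gCO2eq/kWh; the generation mix is purely illustrative and is not taken from this article.

```python
# Back-of-the-envelope grid carbon intensity: generation-weighted average
# of median lifecycle emission factors (gCO2eq/kWh, IPCC AR5 medians).
FACTORS = {"coal": 820, "gas": 490, "hydro": 24,
           "nuclear": 12, "wind": 11, "solar_pv": 48}

# Illustrative generation mix (shares of total electricity; sums to 1).
mix = {"coal": 0.35, "gas": 0.25, "hydro": 0.17,
       "nuclear": 0.10, "wind": 0.08, "solar_pv": 0.05}
assert abs(sum(mix.values()) - 1.0) < 1e-9

intensity = sum(FACTORS[src] * share for src, share in mix.items())
print(f"grid carbon intensity: {intensity:.0f} gCO2eq/kWh")  # ~418 here
```

Shifting the coal and gas shares toward the low-carbon sources in such a calculation quickly drives the average down toward the 30-60 gCO2eq/kWh achieved by the nuclear-heavy grids mentioned below.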
The term low-carbon power can also include power that continues to utilize the world's natural resources, such as natural gas and coal, but only when employing techniques that reduce carbon dioxide emissions when these fuels are burned, such as the pilot plants performing carbon capture and storage that were operating as of 2012. Because the cost of reducing emissions in the electricity sector appears to be lower than in other sectors such as transportation, the electricity sector may deliver the largest proportional carbon reductions under an economically efficient climate policy. Technologies to produce electric power with low carbon emissions are in use at various scales. Together, they accounted for almost 40% of global electricity in 2020, with wind and solar almost 10%. Technologies The 2014 Intergovernmental Panel on Climate Change report identifies nuclear, wind, solar and hydroelectricity in suitable locations as technologies that can provide electricity with less than 5% of the lifecycle greenhouse gas emissions of coal power. Hydroelectric power Hydroelectric plants have the advantage of being long-lived, and many existing plants have operated for more than 100 years. Hydropower is also an extremely flexible technology from the perspective of power grid operation. Large hydropower provides one of the lowest-cost options in today's energy market, even compared to fossil fuels, and there are no harmful emissions associated with plant operation. However, reservoirs typically do produce some greenhouse gas emissions, which can be high in the tropics. Hydroelectric power is the world's largest low-carbon source of electricity, supplying 15.6% of total electricity in 2019. China is by far the world's largest producer of hydroelectricity, followed by Brazil and Canada. However, there are several significant social and environmental disadvantages of large-scale hydroelectric power systems: dislocation of people living where the reservoirs are planned, release of significant amounts of carbon dioxide and methane during construction and flooding of the reservoir, and disruption of aquatic ecosystems and birdlife. There is now a strong consensus that countries should adopt an integrated approach towards managing water resources, which would involve planning hydropower development in co-operation with other water-using sectors. Nuclear power Nuclear power, with a 10.6% share of world electricity production as of 2013, is the second largest low-carbon power source. Nuclear power, in 2010, also provided two-thirds of the twenty-seven-nation European Union's low-carbon energy, with some EU nations sourcing a large fraction of their electricity from nuclear power; for example, France derives 79% of its electricity from nuclear. As of 2020, nuclear power provided 47% of low-carbon power in the EU, with countries relying largely on nuclear power routinely achieving a carbon intensity of 30-60 gCO2eq/kWh. According to the IAEA and the European Nuclear Society, there were 68 civil nuclear power reactors under construction in 15 countries worldwide in 2013. China had 29 of these nuclear power reactors under construction, as of 2013, with plans to build many more, while in the US the licenses of almost half its reactors have been extended to 60 years, and plans to build another dozen were under serious consideration. A considerable number of new reactors were also being built in South Korea, India, and Russia.
Nuclear power's capability to add significantly to future low-carbon energy growth depends on several factors, including the economics of new reactor designs, such as Generation III reactors, public opinion, and national and regional politics. The 104 U.S. nuclear plants are undergoing the Light Water Reactor Sustainability Program, intended to sustainably extend the life span of the U.S. nuclear fleet by a further 20 years. Further US power plants were under construction in 2013, such as the two AP1000s at the Vogtle Electric Generating Plant. However, the economics of new nuclear power plants are still evolving, and plans to add to those plants are mostly in flux. In 2021, the United Nations Economic Commission for Europe (UNECE) described nuclear power as an important tool to mitigate climate change that has prevented 74 Gt of CO2 emissions over the last half century, providing 20% of energy in Europe and 43% of its low-carbon energy. Wind power Solar power Solar power is the conversion of sunlight into electricity, either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP). Concentrated solar power systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Photovoltaics convert light into electric current using the photoelectric effect. Commercial concentrated solar power plants were first developed in the 1980s. The 354 MW SEGS CSP installation is the largest solar power plant in the world, located in the Mojave Desert of California. Other large CSP plants include the Solnova Solar Power Station (150 MW) and the Andasol solar power station (150 MW), both in Spain. The over 200 MW Agua Caliente Solar Project in the United States, and the 214 MW Charanka Solar Park in India, are the world's largest photovoltaic plants. Solar power's share of worldwide electricity usage at the end of 2014 was 1%. Geothermal power Geothermal electricity is electricity generated from geothermal energy. Technologies in use include dry steam power plants, flash steam power plants and binary cycle power plants. Geothermal electricity generation is used in 24 countries, while geothermal heating is in use in 70 countries. Current worldwide installed capacity is 10,715 megawatts (MW), with the largest capacity in the United States (3,086 MW), the Philippines, and Indonesia. Estimates of the electricity generating potential of geothermal energy vary from 35 to 2000 GW. Geothermal power is considered to be sustainable because the heat extraction is small compared to the Earth's heat content. The emission intensity of existing geothermal electric plants is on average 122 kg of CO2 per megawatt-hour (MW·h) of electricity, a small fraction of that of conventional fossil fuel plants. Tidal power Tidal power is a form of hydropower that converts the energy of tides into electricity or other useful forms of power. The first large-scale tidal power plant (the Rance Tidal Power Station) started operation in 1966. Although not yet widely used, tidal power has potential for future electricity generation. Tides are more predictable than wind energy and solar power. Carbon capture and storage Carbon capture and storage captures carbon dioxide from the flue gas of power plants or other industries, transporting it to an appropriate location where it can be buried securely in an underground reservoir.
While the technologies involved are all in use, and carbon capture and storage is occurring in other industries (e.g., at the Sleipner gas field), no large-scale integrated project has yet become operational within the power industry. Improvements to current carbon capture and storage technologies could reduce CO2 capture costs by at least 20-30% over approximately the next decade, while new technologies under development promise more substantial cost reductions. Outlook and requirements Emissions The Intergovernmental Panel on Climate Change stated in its first working group report that "most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations", which contribute to climate change. As a percentage of all anthropogenic greenhouse gas emissions, carbon dioxide (CO2) accounts for 72 percent (see Greenhouse gas), and its concentration in the atmosphere has increased from 315 parts per million (ppm) in 1958 to more than 375 ppm in 2005. Emissions from energy make up more than 61.4 percent of all greenhouse gas emissions. Power generation from traditional coal fuel sources accounts for 18.8 percent of all world greenhouse gas emissions, nearly double that emitted by road transportation. Estimates stated that by 2020 the world would be producing around twice as much carbon emissions as it did in 2000. The European Union hopes to sign a law mandating net-zero greenhouse gas emissions in the coming year for all 27 countries in the union. Electricity usage World energy consumption is predicted to increase from 123,000 TWh (421 quadrillion BTU) in 2003 to 212,000 TWh (722 quadrillion BTU) in 2030. Coal consumption is predicted to nearly double in that same time. The fastest growth is seen in non-OECD Asian countries, especially China and India, where economic growth drives increased energy use. By implementing low-carbon power options, world electricity demand could continue to grow while maintaining stable carbon emission levels. In the transportation sector there are moves away from fossil fuels and towards electric vehicles, such as mass transit and the electric car. These trends are small, but may eventually add a large demand to the electrical grid. Domestic and industrial heat and hot water have largely been supplied by burning fossil fuels such as fuel oil or natural gas at the consumers' premises. Some countries have begun heat pump rebates to encourage switching to electricity, potentially adding a large demand to the grid. Energy infrastructure Coal-fired power plants are losing market share compared to low-carbon power, and any built in the 2020s risk becoming stranded assets or stranded costs, partly because their capacity factors will decline. Investment Investment in low-carbon power sources and technologies is increasing at a rapid rate. Zero-carbon power sources produce about 2% of the world's energy, but account for about 18% of world investment in power generation, attracting $100 billion of investment capital in 2006. See also Carbon capture and storage Carbon sink Climate change Emissions trading Energy development Energy portal Global warming Greenhouse gases List of renewable energy organizations Renewable energy commercialization References
streaming media
Streaming media is multimedia delivered for playback using an offline or online media player. Technically, the stream is delivered to and consumed by a client in a continuous manner, with little or no intermediate storage in network elements. Streaming refers to the delivery method of content, rather than the content itself. Distinguishing the delivery method from the media applies specifically to telecommunications networks, as most traditional media delivery systems are either inherently streaming (e.g. radio, television) or inherently non-streaming (e.g. books, videotapes, audio CDs). There are challenges with streaming content on the Internet. For example, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content, and users lacking compatible hardware or software systems may be unable to stream certain content. By buffering the content for just a few seconds in advance of playback, the quality can be much improved. Livestreaming is the real-time delivery of content during production, much as live television broadcasts content via television channels. Livestreaming requires a form of source media (e.g. a video camera, an audio interface, screen capture software), an encoder to digitize the content, a media publisher, and a content delivery network to distribute and deliver the content. Streaming is an alternative to file downloading, a process in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user can use their media player to start playing digital video or digital audio content before the entire file has been transmitted. The term "streaming media" can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are all considered "streaming text". Streaming is most prevalent in video on demand and streaming television services. Other services stream music or video games. Etymology The term "streaming" was first used for tape drives manufactured by Data Electronics Inc. that were meant to slowly ramp up and run for the entire track; slower ramp times lowered drive costs. "Streaming" was applied in the early 1990s as a better description for video on demand, and later live video, on IP networks. It was first done by Starlight Networks for video streaming and RealNetworks for audio streaming. Such video had previously been referred to by the misnomer "store and forward video". Precursors Beginning in 1881, Théâtrophone enabled subscribers to listen to opera and theatre performances over telephone lines. It operated until 1932. The concept of media streaming eventually came to America. In the early 1920s, George Owen Squier was granted patents for a system for the transmission and distribution of signals over electrical lines, which was the technical basis for what later became Muzak, a technology streaming continuous music to commercial customers without the use of radio. The Telephone Music Service, a live jukebox service, began in 1929 and continued until 1997. Its clientele eventually included 120 bars and restaurants in the Pittsburgh area. A tavern customer would deposit money in the jukebox, use a telephone on top of the jukebox, and ask the operator to play a song. The operator would find the record in the studio library of more than 100,000 records, put it on a turntable, and the music would be piped over the telephone line to play in the tavern.
The music media began as 78s, 33s and 45s, played on the six turntables the operators monitored. CDs and tapes were incorporated in later years. The business had a succession of owners, notably Bill Purse, his daughter Helen Reutzel, and finally Dotti White. Each quarter's revenue was split 60% to the music service and 40% to the tavern owner. This business model eventually became unsustainable due to city permits and the cost of setting up the telephone lines. History Early development Attempts to display media on computers date back to the earliest days of computing in the mid-20th century. However, little progress was made for several decades, primarily due to the high cost and limited capabilities of computer hardware. From the late 1980s through the 1990s, consumer-grade personal computers became powerful enough to display various media. The primary technical issues related to streaming were having enough CPU and bus bandwidth to support the required data rates and achieving the real-time computing performance required to prevent buffer underrun and enable smooth streaming of the content. However, computer networks were still limited in the mid-1990s, and audio and video media were usually delivered over non-streaming channels, such as playback from a local hard disk drive or CD-ROMs on the end user's computer. In 1990 the first commercial Ethernet switch was introduced by Kalpana, which enabled the more powerful computer networks that led to the first streaming video solutions used by schools and corporations. Practical streaming media was only made possible with advances in data compression, due to the impractically high bandwidth requirements of uncompressed media. Raw digital audio encoded with pulse-code modulation (PCM) requires a bandwidth of 1.4 Mbit/s for uncompressed CD audio, while raw digital video requires a bandwidth of 168 Mbit/s for SD video and over 1000 Mbit/s for FHD video. Late 1990s to early 2000s During the late 1990s and early 2000s, users had increased access to computer networks, especially the Internet. During the early 2000s, users had access to increased network bandwidth, especially in the last mile. These technological improvements facilitated the streaming of audio and video content to computer users in their homes and workplaces. There was also increasing use of standard protocols and formats, such as TCP/IP, HTTP and HTML, as the Internet became increasingly commercialized, which led to an infusion of investment into the sector. The band Severe Tire Damage was the first group to perform live on the Internet. On June 24, 1993, the band was playing a gig at Xerox PARC while elsewhere in the building, scientists were discussing new technology (the Mbone) for broadcasting on the Internet using multicasting. As proof of PARC's technology, the band's performance was broadcast and could be seen live in Australia and elsewhere. In a March 2017 interview, band member Russ Haines stated that the band had used approximately "half of the total bandwidth of the internet" to stream the performance, which was a 152 × 76 pixel video, updated eight to twelve times per second, with audio quality that was, "at best, a bad telephone connection." In October 1994, a school music festival was webcast from the Michael Fowler Centre in Wellington, New Zealand.
The technician who arranged the webcast, local council employee Richard Naylor, later commented: "We had 16 viewers in 12 countries." RealNetworks pioneered the broadcast of a baseball game between the New York Yankees and the Seattle Mariners over the Internet in 1995. The first symphonic concert on the Internet, a collaboration between the Seattle Symphony and guest musicians Slash, Matt Cameron, and Barrett Martin, took place at the Paramount Theater in Seattle, Washington, on November 10, 1995. In 1996, Marc Scarpa produced the first large-scale, online, live broadcast, the Adam Yauch-led Tibetan Freedom Concert, an event that would define the format of social change broadcasts. Scarpa continued to pioneer in the streaming media world with projects such as Woodstock '99, Townhall with President Clinton, and more recently Covered CA's campaign "Tell a Friend Get Covered", which was live streamed on YouTube. Business developments Xing Technology, founded in 1989, developed a JPEG streaming product called "StreamWorks". Another streaming product, named StarWorks, appeared in late 1992. StarWorks enabled on-demand MPEG-1 full-motion videos to be randomly accessed on corporate Ethernet networks. StarWorks was from Starlight Networks, who also pioneered live video streaming on Ethernet and via Internet Protocol over satellites with Hughes Network Systems. Other early companies that created streaming media technology include Progressive Networks and Protocomm, both prior to widespread World Wide Web usage. After the Netscape IPO in 1995 (and the release of Windows 95, with built-in TCP/IP support), usage of the Internet expanded, and many companies "went public", including Progressive Networks (which was renamed RealNetworks and listed on Nasdaq as "RNWK"). As the web became even more popular in the late 90s, streaming video on the internet blossomed from startups such as Vivo Software (later acquired by RealNetworks), VDOnet (acquired by RealNetworks), Precept (acquired by Cisco), and Xing (acquired by RealNetworks). Microsoft developed a media player known as ActiveMovie in 1995 that supported streaming media and included a proprietary streaming format, the precursor to the streaming feature later found in Windows Media Player 6.4 in 1999. In June 1999, Apple also introduced a streaming media format in its QuickTime 4 application. It was later widely adopted on websites, along with the RealPlayer and Windows Media streaming formats. The competing formats required each website user to download the respective applications for streaming, and resulted in many users having to have all three applications on their computer for general compatibility. In 2000, Industryview.com launched its "world's largest streaming video archive" website to help businesses promote themselves. Webcasting became an emerging tool for business marketing and advertising that combined the immersive nature of television with the interactivity of the Web. The ability to collect data and feedback from potential customers caused this technology to gain momentum quickly. Around 2002, the interest in a single, unified streaming format and the widespread adoption of Adobe Flash prompted the development of a video streaming format through Flash, which was the format used in Flash-based players on video hosting sites. The first popular video streaming site, YouTube, was founded by Steve Chen, Chad Hurley and Jawed Karim in 2005.
It initially used a Flash-based player, which played MPEG-4 AVC video and AAC audio, but now defaults to HTML5 video. Increasing consumer demand for live streaming prompted YouTube to implement a new live streaming service for users. The company currently also offers a (secured) link returning the available connection speed of the user. The Recording Industry Association of America (RIAA) revealed through its 2015 earnings report that streaming services were responsible for 34.3 percent of the year's total music industry revenue, growing 29 percent from the previous year and becoming the largest source of income, pulling in around $2.4 billion. US streaming revenue grew 57 percent to $1.6 billion in the first half of 2016 and accounted for almost half of industry sales. Streaming wars The term streaming wars was coined to describe the new era (starting in 2019) of competition between video streaming services such as Netflix, Amazon Prime Video, Hulu, Max, Disney+, Paramount+, Apple TV+, and Peacock. Competition among online platforms has forced them to find ways to differentiate themselves. One key way they have done this is by offering exclusive content, often self-produced and created specifically for a market. This approach to streaming competition can have disadvantages for consumers and the industry as a whole. Once content is made available online, the corresponding piracy searches decrease. Competition or legal availability across multiple platforms effectively deters online piracy, and more exclusivity does not necessarily translate into higher average investment in content, because investment decisions also depend on the level and type of competition in online markets. This competition increased during the first two years of the COVID-19 pandemic as more people stayed home and watched TV. "The COVID-19 pandemic has led to a seismic shift in the film & TV industry in terms of how films are made, distributed and screened. Many industries have been hit by the economic effect of the pandemic" (Totaro Donato). In August 2022, a CNN headline declared "The streaming wars are over" as pandemic-era restrictions had largely ended and audience growth had stalled. This led services to focus on profit over market share by cutting production budgets, cracking down on password sharing, and introducing ad-supported tiers. A December 2022 article in The Verge echoed this, declaring an end to the "golden age of the streaming wars". In September 2023, several streaming services formed a trade association named the Streaming Innovation Alliance (SIA), spearheaded by Charles Rivkin of the Motion Picture Association (MPA). Former U.S. representative Fred Upton and former Federal Communications Commission (FCC) acting chair Mignon Clyburn serve as senior advisors. Founding members include AfroLandTV, America Nu Network, BET+, Discovery+, Disney+, Disney+ Hotstar, ESPN+, For Us By Us Network, Hulu, Max, the MPA, MotorTrend+, Netflix, Paramount+, Peacock, Pluto TV, Star+, Telemundo, TelevisaUnivision, Vault TV, and Vix. Notably absent were Apple, Amazon, Roku, and Tubi. Use by the general public Advances in computer networking, combined with powerful home computers and operating systems, made streaming media affordable and easy for the public. Stand-alone Internet radio devices emerged to offer listeners a non-technical option for listening to audio streams. These audio-streaming services became increasingly popular; streaming music reached 118.1 billion streams in 2013.
In general, multimedia content is data intensive, so media storage and transmission costs are still significant. Media is generally compressed for transport and storage. Increasing consumer demand for streaming of high-definition (HD) content has led the industry to develop technologies such as WirelessHD and G.hn, which are optimized for streaming HD content. Many developers have introduced HD streaming apps that work on smaller devices such as tablets and smartphones for everyday purposes. A media stream can be streamed either live or on demand. Live streams are generally provided by a means called true streaming. True streaming sends the information straight to the computer or device without saving it to a local file. On-demand streaming is provided by a means called progressive download. Progressive download saves the received information to a local file, from which it is then played. On-demand streams are often saved to files for extended periods of time, while live streams are available only at one point in time (e.g. during the football game). Streaming media is increasingly being coupled with use of social media. For example, sites such as YouTube encourage social interaction in webcasts through features such as live chat, online surveys, user posting of comments online and more. Furthermore, streaming media is increasingly being used for social business and e-learning. The Horowitz Research State of Pay TV, OTT and SVOD 2017 report said that 70 percent of those viewing content did so through a streaming service, and that 40 percent of TV viewing was done this way, twice the number from five years earlier. Millennials, the report said, streamed 60 percent of content. Transition from DVD One of the movie streaming industry's largest impacts was on the DVD industry, which drastically dropped in popularity and profitability with the mass popularization of online content. The rise of media streaming caused the downfall of many DVD rental companies such as Blockbuster. In July 2015, The New York Times published an article about Netflix's DVD services. It stated that Netflix was continuing its DVD services with 5.3 million subscribers, a significant drop from the previous year. On the other hand, its streaming services had 65 million members. Napster Music streaming is one of the most popular ways in which consumers interact with streaming media. In the age of digitization, the private consumption of music transformed into a public good, largely due to one player in the market: Napster. Napster, a peer-to-peer (P2P) file-sharing network where users could upload and download MP3 files freely, broke all music industry conventions when it launched in early 1999 in Hull, Massachusetts. The platform was developed by Shawn and John Fanning as well as Sean Parker. In an interview from 2009, Shawn Fanning explained that Napster "was something that came to me as a result of seeing a sort of an unmet need and the passion people had for being able to find all this music, particularly a lot of the obscure stuff which wouldn't be something you go to a record store and purchase, so it felt like a problem worth solving." Not only did this development disrupt the music industry by making songs that previously required payment freely accessible to any Napster user, but it also demonstrated the power of P2P networks in turning any digital file into a public, shareable good. For the brief period of time that Napster existed, mp3 files fundamentally changed as a type of good.
Songs were no longer financially excludable (barring access to a computer with internet access), and they were non-rival, meaning that one person downloading a song did not prevent any other user from doing the same. Napster, like most other providers of public goods, faced the free-rider problem. Every user benefits when an individual uploads an mp3 file, but there is no requirement or mechanism that forces all users to share their music. Generally, the platform encouraged sharing: users who downloaded files from others often had their own files available for upload as well. However, not everyone chose to share their files; there was no built-in mechanism that required or specifically rewarded sharing. This structure revolutionized the consumer's perception of ownership over digital goods, making music freely replicable. Napster quickly garnered millions of users, growing faster than any other business in history. At the peak of its existence, Napster boasted about 80 million users globally. The site gained so much traffic that many college campuses had to block access to Napster because it created network congestion from so many students sharing music files. The advent of Napster sparked the creation of numerous other P2P sites, including LimeWire (2000), BitTorrent (2001), and the Pirate Bay (2003). The reign of P2P networks was short-lived. The first to fall was Napster in 2001. Numerous lawsuits were filed against Napster by various record labels, all of which were subsidiaries of Universal Music Group, Sony Music Entertainment, Warner Music Group, or EMI. In addition, the Recording Industry Association of America (RIAA) filed a lawsuit against Napster on the grounds of unauthorized distribution of copyrighted material, which ultimately led Napster to shut down in 2001. In an interview with the New York Times, Gary Stiffelman, who represents Eminem, Aerosmith, and TLC, explained, "I'm not an opponent of artists' music being included in these services, I'm just an opponent of their revenue not being shared." The fight for intellectual property rights: A&M Records, Inc. v. Napster, Inc. The lawsuit A&M Records, Inc. v. Napster, Inc. fundamentally changed the way consumers interact with music streaming. It was argued on 2 October 2000 and decided on 12 February 2001. The Court of Appeals for the Ninth Circuit ruled that a P2P file-sharing service could be held liable for contributory and vicarious infringement of copyright, serving as a landmark decision for intellectual property law. The first issue that the Court addressed was fair use, which says that otherwise infringing activities are permissible so long as they are for purposes "such as criticism, comment, news reporting, teaching [...] scholarship, or research." Judge Beezer, the judge for this case, noted that Napster claimed its services fit "three specific alleged fair uses: sampling, where users make temporary copies of a work before purchasing; space-shifting, where users access a sound recording through the Napster system that they already own in audio CD format; and permissive distribution of recordings by both new and established artists." Judge Beezer found that Napster did not fit these criteria, instead enabling its users to repeatedly copy music, which would affect the market value of the copyrighted good. The second claim by the plaintiffs was that Napster was actively contributing to copyright infringement since it had knowledge of widespread file sharing on its platform.
Since Napster took no action to reduce infringement and financially benefited from repeated use, the court ruled against the P2P site. The court found that "as much as eighty-seven percent of the files available on Napster may be copyrighted and more than seventy percent may be owned or administered by plaintiffs." The injunction ordered against Napster ended the brief period in which music streaming was a public good, non-rival and non-excludable in nature. Other P2P networks had some success at sharing MP3s, though they all met a similar fate in court. The ruling set the precedent that copyrighted digital content cannot be freely replicated and shared unless given consent by the owner, thereby strengthening the property rights of artists and record labels alike. Music streaming platforms Although music streaming is no longer a freely replicable public good, streaming platforms such as Spotify, Deezer, Apple Music, SoundCloud, YouTube Music, and Amazon Music have shifted music streaming to a club-type good. While some platforms, most notably Spotify, give customers access to a freemium service that enables the use of limited features in exchange for exposure to advertisements, most companies operate under a premium subscription model. Under such circumstances, music streaming is financially excludable, requiring that customers pay a monthly fee for access to a music library, but non-rival, since one customer's use does not impair another's. Competition between these services is similar to, but less intense than, the streaming wars for video media. As of 2019, Spotify had over 207 million users in 78 countries; as of 2018, Apple Music had about 60 million and SoundCloud 175 million. All platforms provide varying degrees of accessibility. Apple Music and Prime Music only offer their services to paid subscribers, whereas Spotify and SoundCloud offer freemium and premium services. Napster, owned by Rhapsody since 2011, has resurfaced as a music streaming platform offering subscription-based services to over 4.5 million users as of January 2017. The music industry's response to music streaming was initially negative. Along with music piracy, streaming services disrupted the market and contributed to the fall in US revenue from $14.6 billion in 1999 to $6.3 billion in 2009. CDs and single-track downloads were not selling because content was freely available on the Internet. By 2018, however, music streaming revenue exceeded that of traditional revenue streams (e.g. record sales, album sales, downloads). Streaming revenue is now one of the largest driving forces behind the growth in the music industry. In an interview, Jonathan Dworkin, a senior vice president of strategy and business development at Universal, said, "We cannot be afraid of perpetual change, because that dynamism is driving growth." COVID-19 pandemic By August 2020, the COVID-19 pandemic had streaming services busier than ever. In the UK alone, twelve million people joined a streaming service that they had not previously used. An impact analysis of 2020 data by the International Confederation of Societies of Authors and Composers (CISAC) indicated that remuneration from digital streaming of music increased, with a strong rise in digital royalty collection (up 16.6% to EUR 2.4 billion), but that it would not compensate for the overall loss of authors' income from concerts, public performance and broadcast. The International Federation of the Phonographic Industry (IFPI) compiled the music industry initiatives around the world related to COVID-19.
In its State of the Industry report, it recorded that the global recorded music market grew by 7.4% in 2020, the sixth consecutive year of growth. This growth was driven by streaming, mostly from paid subscription streaming revenues, which increased by 18.5%, fueled by 443 million users of subscription accounts by the end of 2020. The COVID-19 pandemic also drove an increase in misinformation and disinformation, particularly on streaming platforms like YouTube and podcasts. Local/home streaming Streaming also refers to the offline streaming of multimedia at home. This is made possible by technologies such as DLNA, which allow devices on the same local network to connect to each other and share media. Technologies Bandwidth A broadband speed of 2 Mbit/s or more is recommended for streaming standard-definition video, for example to a Roku, Apple TV, Google TV or a Sony TV Blu-ray Disc Player. 5 Mbit/s is recommended for high-definition content and 9 Mbit/s for ultra-high-definition content. Streaming media storage size is calculated from the streaming bandwidth and length of the media using the following formula (for a single user and file): storage size in megabytes is equal to length (in seconds) × bit rate (in bit/s) / (8 × 1024 × 1024). For example, one hour of digital video encoded at 300 kbit/s (a typical broadband video in 2005, usually encoded at 320 × 240 resolution) requires around 128 MB of storage: (3,600 s × 300,000 bit/s) / (8 × 1024 × 1024). If the file is stored on a server for on-demand streaming and this stream is viewed by 1,000 people at the same time using a unicast protocol, the requirement is 300 kbit/s × 1,000 = 300,000 kbit/s = 300 Mbit/s of bandwidth, equivalent to around 135 GB per hour. Using a multicast protocol, the server sends out only a single stream that is common to all users; such a stream would therefore use only 300 kbit/s of server bandwidth (this arithmetic is reproduced in the short sketch after this passage). In 2018, video made up more than 60% of data traffic worldwide and accounted for 80% of growth in data usage. Protocols Video and audio streams are compressed to make the file size smaller. Audio coding formats include MP3, Vorbis, AAC and Opus. Video coding formats include H.264, HEVC, VP8 and VP9. Encoded audio and video streams are assembled in a container bitstream such as MP4, FLV, WebM, ASF or ISMA. The bitstream is delivered from a streaming server to a streaming client (e.g., the computer user with their Internet-connected laptop) using a transport protocol, such as Adobe's RTMP or RTP. In the 2010s, technologies such as Apple's HLS, Microsoft's Smooth Streaming, Adobe's HDS and non-proprietary formats such as MPEG-DASH emerged to enable adaptive bitrate streaming over HTTP as an alternative to using proprietary transport protocols. Often, a streaming transport protocol is used to send video from an event venue to a cloud transcoding service and content delivery network, which then use HTTP-based transport protocols to distribute the video to individual homes and users. The streaming client (the end user) may interact with the streaming server using a control protocol, such as MMS or RTSP. The quality of the interaction between servers and users depends on the workload of the streaming service; as more users attempt to access a service, quality may suffer due to resource constraints in the service.
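The storage and bandwidth arithmetic above packages naturally into two small helpers. This is a minimal Python sketch reproducing the worked example; the function names are introduced here for illustration only.

```python
# Storage and server-bandwidth arithmetic from the passage above.

def storage_mb(length_s: float, bitrate_bps: float) -> float:
    """Storage for one stream: length (s) * bit rate (bit/s) / (8 * 1024 * 1024)."""
    return length_s * bitrate_bps / (8 * 1024 * 1024)

def unicast_bandwidth_mbps(bitrate_bps: float, viewers: int) -> float:
    """Server bandwidth when every viewer receives a separate unicast copy."""
    return bitrate_bps * viewers / 1e6

print(storage_mb(3600, 300_000))              # ~128.7 MB: one hour at 300 kbit/s
print(unicast_bandwidth_mbps(300_000, 1000))  # 300.0 Mbit/s for 1,000 viewers
print(unicast_bandwidth_mbps(300_000, 1))     # 0.3 Mbit/s: one shared multicast stream
```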
Deploying clusters of streaming servers is one such method: regional servers are spread across the network, managed by a single central server that contains copies of all the media files as well as the IP addresses of the regional servers. The central server then uses load balancing and scheduling algorithms to redirect users to nearby regional servers capable of accommodating them. This approach also allows the central server to provide streaming data to both users and regional servers, using FFmpeg libraries if required, which demands that the central server have powerful data-processing and immense storage capabilities. In return, workloads on the streaming backbone network are balanced and alleviated, allowing for optimal streaming quality. Designing a network protocol to support streaming media raises many problems. Datagram protocols, such as the User Datagram Protocol (UDP), send the media stream as a series of small packets. This is simple and efficient; however, there is no mechanism within the protocol to guarantee delivery. It is up to the receiving application to detect loss or corruption and recover data using error correction techniques. If data is lost, the stream may suffer a dropout. The Real Time Streaming Protocol (RTSP), Real-time Transport Protocol (RTP) and the Real-time Transport Control Protocol (RTCP) were specifically designed to stream media over networks. RTSP runs over a variety of transport protocols, while the latter two are built on top of UDP. HTTP adaptive bitrate streaming is based on HTTP progressive download, but contrary to the previous approach, here the files are very small, so that they can be compared to the streaming of packets, much like the case of using RTSP and RTP (a toy sketch of the client-side rendition selection behind this approach follows at the end of this passage). Reliable protocols, such as the Transmission Control Protocol (TCP), guarantee correct delivery of each bit in the media stream. This means, however, that when there is data loss on the network, the media stream stalls while the protocol handlers detect the loss and retransmit the missing data. Clients can minimize this effect by buffering data for display. While delay due to buffering is acceptable in video-on-demand scenarios, users of interactive applications such as video conferencing will experience a loss of fidelity if the delay caused by buffering exceeds 200 ms. Unicast protocols send a separate copy of the media stream from the server to each recipient. Unicast is the norm for most Internet connections but does not scale well when many users want to view the same television program concurrently. Multicast protocols were developed to reduce the server and network loads resulting from duplicate data streams that occur when many recipients receive unicast content streams independently. These protocols send a single stream from the source to a group of recipients. Depending on the network infrastructure and type, multicast transmission may or may not be feasible. One potential disadvantage of multicasting is the loss of video on demand functionality. Continuous streaming of radio or television material usually precludes the recipient's ability to control playback. However, this problem can be mitigated by elements such as caching servers, digital set-top boxes, and buffered media players. IP multicast provides a means to send a single media stream to a group of recipients on a computer network. A connection management protocol, usually Internet Group Management Protocol, is used to manage the delivery of multicast streams to the groups of recipients on a LAN.
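Returning to the HTTP adaptive bitrate streaming described above: the client typically measures throughput after each downloaded segment and picks the highest rendition that fits. The sketch below shows the core selection logic in Python; the bitrate ladder and the 80% headroom factor are illustrative assumptions, not values from any particular player or standard.

```python
# Minimal client-side rendition selection for adaptive bitrate streaming.
RENDITIONS_KBPS = [235, 750, 1750, 3000, 5800]   # hypothetical bitrate ladder

def pick_rendition(measured_kbps: float, headroom: float = 0.8) -> int:
    """Highest rendition fitting within `headroom` x measured throughput,
    falling back to the lowest rendition so playback never stops entirely."""
    budget = measured_kbps * headroom
    fitting = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(fitting) if fitting else min(RENDITIONS_KBPS)

# Re-evaluated per segment, so quality follows the network conditions.
for throughput in (400, 2500, 9000):                      # kbit/s
    print(throughput, "->", pick_rendition(throughput))   # 235, 1750, 5800
```

Production players layer throughput smoothing and buffer-occupancy rules on top, but this is the basic idea behind switching between the small files mentioned above.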
One of the challenges in deploying IP multicast is that routers and firewalls between LANs must allow the passage of packets destined for multicast groups. If the organization that is serving the content has control over the network between server and recipients (i.e., educational, government, and corporate intranets), then routing protocols such as Protocol Independent Multicast can be used to deliver stream content to multiple local area network segments. Peer-to-peer (P2P) protocols arrange for prerecorded streams to be sent between computers. This prevents the server and its network connections from becoming a bottleneck. However, it raises technical, performance, security, quality, and business issues. Content delivery networks (CDNs) use intermediate servers to distribute the load. Internet-compatible unicast delivery is used between CDN nodes and streaming destinations. Recording Media that is livestreamed can be recorded through certain media players such as VLC media player, or through the use of a screen recorder. Live-streaming platforms such as Twitch may also incorporate a video on demand system that allows automatic recording of live broadcasts so that they can be watched later. YouTube also has recordings of live broadcasts, including television shows aired on major networks. These streams have the potential to be recorded by anyone who has access to them, whether legally or otherwise. View recommendation Most streaming services feature a recommender system for viewing, based on each user's view history in conjunction with all viewers' aggregated view histories. Rather than relying on subjective categorization of content by curators, the assumption is that, given the immensity of data collected on viewing habits, the choices of those who are first to view content can be algorithmically extrapolated to the totality of the user base, with increasing probabilistic accuracy as to the likelihood of users choosing and enjoying the recommended content as more data is collected. Applications and marketing Useful, and typical, applications of streaming are, for example, long video lectures performed online. An advantage of this presentation is that these lectures can be very long, and they can be interrupted or repeated at arbitrary places. There are also new marketing concepts. For example, the Berlin Philharmonic Orchestra sells Internet live streams of whole concerts, instead of several CDs or similar fixed media, through its Digital Concert Hall, using YouTube for trailers. These online concerts are also distributed to cinemas at various places around the globe. A similar concept is used by the Metropolitan Opera in New York. There is also a livestream from the International Space Station. In video entertainment, video streaming platforms like Netflix, Hulu, and Disney+ are mainstream elements of the media industry. Marketers have found many opportunities in streaming media and the platforms that offer them, especially in light of the significant increase in the use of streaming media during COVID lockdowns from 2020 onwards. While revenue from traditional advertising placements continues to decrease, digital marketing grew by 15% in 2021, with digital media and search representing 65% of expenditures.
Challenges
Copyright issues
The availability of large-bandwidth internet has enabled audiovisual streaming services to attract large numbers of users around the world. For OTT platforms, original content represents a critical variable in order to capture more subscribers. This has generated a number of effects related to copyright over audiovisual content and its international exploitation through streaming, such as contractual practices, international exploitation of rights, and the widespread use of standards and metadata in digital files. The WIPO has indicated several basic copyright issues arising for those seeking to work in the film and music industry in the era of streaming. Streaming copyrighted content can involve making infringing copies of the works in question. The recording and distribution of streamed content is also an issue for many companies that rely on revenue based on views or attendance.

Greenhouse gas emissions
The net greenhouse gas emissions from streaming music were estimated at between 0.2 and 0.35 million metric tons CO2eq (between 200,000 and 340,000 long tons; 220,000 and 390,000 short tons) per year in the United States by a 2019 study. This was an increase from emissions in the pre-digital music period, which were estimated at "0.14 million metric tons (140,000 long tons; 150,000 short tons) in 1977, 0.136 million (134,000 long tons; 150,000 short tons) in 1988, and 0.157 million (155,000 long tons; 173,000 short tons) in 2000." However, this is far less than other everyday activities such as eating: for example, greenhouse gas emissions in the United States from beef cattle (burping of ruminants only, not including their manure) were 129 million metric tons (127 million long tons; 142 million short tons) in 2019.

A 2021 study claimed that, based on the amount of data transmitted, one hour of streaming or videoconferencing "emits 150–1,000 grams (5–35 oz) of carbon dioxide ... requires 2–12 liters (0.4–2.6 imp gal; 0.5–3.2 U.S. gal) of water and demands a land area adding up to about the size of an iPad Mini." The study suggests that turning the camera off during video calls can reduce the greenhouse gas and water use footprints by 96%, and that an 86% reduction is possible by using standard definition rather than high definition when streaming content with apps such as Netflix or Hulu. However, another study estimated a relatively low figure of 36 grams per hour (1.3 ounces per hour), and concluded that watching a Netflix video for half an hour emitted only about as much as driving a gasoline-fuelled car for about 100 meters (330 ft), so not a significant amount.
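As a rough cross-check of that lower estimate (a sketch assuming a typical petrol car emitting about 180 g of CO2 per kilometre, a figure not given in the studies themselves):

```python
# Rough cross-check of the 36 g/h streaming estimate above.
# Assumed car emission factor: ~180 g CO2/km (typical petrol car).
streaming_g_per_hour = 36
half_hour_emissions = streaming_g_per_hour / 2        # 18 g CO2
car_g_per_km = 180
equivalent_km = half_hour_emissions / car_g_per_km    # ~0.1 km
print(f"{half_hour_emissions:.0f} g CO2 ~= driving {equivalent_km * 1000:.0f} m")
```

With these assumptions, half an hour of streaming corresponds to roughly 100 m of driving, consistent with the study's conclusion.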
One way to decrease the greenhouse gas emissions associated with streaming music is to make data centers carbon neutral by converting to electricity produced from renewable sources. On an individual level, purchase of a physical CD may be more environmentally friendly if it is to be played more than 27 times. Another option for reducing energy use is downloading the music for offline listening, which reduces the need for repeated streaming over distance. The Spotify service has a built-in local cache to reduce the necessity of repeating song streams.

See also

References

Further reading
Hagen, Anja Nylund (2020). "Music in Streams: Communicating Music in the Streaming Paradigm". In Michael Filimowicz & Veronika Tzankova (eds.), Reimagining Communication: Mediation (1st ed.). Routledge.
Preston, J. (11 December 2011). "Occupy Video Showcases Live Streaming". The New York Times.
Sherman, Alex (27 October 2019). "AT&T, Disney and Comcast have very different plans for the streaming wars – here's what they're doing and why". CNBC.

External links
"The Early History of the Streaming Media Industry and The Battle Between Microsoft & Real". streamingmedia.com. March 2016. Archived from the original on 21 March 2016. Retrieved 25 March 2016.
"What is Streaming? A high-level view of streaming media technology, history". streamingmedia.com. Retrieved 25 March 2016.
energy in indonesia
In 2019, total energy production in Indonesia was 450.79 Mtoe, with a total primary energy supply of 231.14 Mtoe and electricity final consumption of 263.32 TWh. Energy use in Indonesia has long been dominated by fossil resources. Once a major world oil exporter, the country joined OPEC in 1962 but has since become a net oil importer, despite remaining an OPEC member until 2016, making it the only net oil-importing member in the organization. Indonesia is also the fourth-largest coal producer and one of the biggest coal exporters in the world, with 24,910 million tons of proven coal reserves as of 2016, the 11th-largest coal reserves in the world. In addition, Indonesia has abundant renewable energy potential, reaching almost 417.8 gigawatts (GW), consisting of solar, wind, hydro, geothermal energy, ocean currents, and bioenergy, although only 2.5% has been utilized. Furthermore, Indonesia and Malaysia together hold two-thirds of ASEAN's gas reserves, with total annual gas production of more than 200 billion cubic meters in 2016.

The Government of Indonesia has outlined several commitments to increase clean energy use and reduce greenhouse gas emissions, among others by issuing the National Energy General Plan (RUEN) in 2017 and joining the Paris Agreement. In the RUEN, Indonesia targets New and Renewable Energy to reach 23% of the total energy mix by 2025 and 31% by 2050. The country also commits to reduce its greenhouse gas emissions by 29% by 2030 against a business-as-usual baseline scenario, and by up to 41% with international support.

Indonesia has several high-profile renewable projects, such as the 75 MW wind farm in Sidenreng Rappang Regency, another 72 MW wind farm in Jeneponto Regency, and the Cirata Floating Solar Power Plant in West Java, with a capacity of 145 MW, which will become the largest floating solar power plant in Southeast Asia.

Overview
According to the IEA, energy production in Indonesia increased 34% and exports 76% from 2004 to 2008. In 2017, Indonesia had 52,859 MW of installed electrical capacity, 36,892 MW of which was on the Java–Bali grid. In 2022, Indonesia had an electrical capacity of 81.2 GW, with a projected capacity of 85.1 GW for 2023.

Energy by sources
Fossil fuel energy sources
Coal
Indonesia has large reserves of medium- and low-quality thermal coal, and there are price caps on supplies for domestic power stations, which discourages other types of electricity generation. At current rates of production, Indonesia's coal reserves are expected to last for over 80 years. In 2009 Indonesia was the world's second top coal exporter, sending coal to China, India, Japan, Italy and other countries. Kalimantan (Borneo) and South Sumatra are the centres of coal mining. In recent years, production in Indonesia has been rising rapidly, from just over 200 million tons in 2007 to over 400 million tons in 2013. In 2013, the chair of the Indonesian Coal Mining Association said that production in 2014 might reach 450 million tons.

The Indonesian coal industry is rather fragmented. Output is supplied by a few large producers and a large number of small firms. Large firms in the industry include the following:
PT Bumi Resources (the controlling shareholder of large coal firms PT Kaltim Prima Coal and PT Arutmin Indonesia)
PT Adaro Energy
PT Kideco Jaya Agung
PT Indo Tambangraya Megah
PT Berau Coal
PT Tambang Batubara Bukit Asam (state-owned)

Coal production poses risks for deforestation in Kalimantan.
According to one Greenpeace report, a coal plant in Indonesia has decreased fishing catches and increased respiratory diseases.

Oil
Oil is a major sector in the Indonesian economy. During the 1980s, Indonesia was a significant oil-exporting country. Since 2000, domestic consumption has continued to rise while production has been falling, so in recent years Indonesia has begun importing increasing amounts of oil. Within Indonesia, there are considerable amounts of oil in Sumatra, Borneo, Java, and West Papua Province. There are said to be around 60 basins across the country, only 22 of which have been explored and exploited. Main oil fields in Indonesia include the following:
Minas. The Minas field, in Riau, Sumatra, operated by the US-based firm Chevron Pacific Indonesia, is the largest oil block in Indonesia. Output from the field is around 20–25% of current annual oil production in Indonesia.
Duri. The Duri field, in Bengkalis Regency, Riau, Sumatra, is operated by the US-based firm Chevron Pacific Indonesia.
Rokan. The Rokan field, Riau, Sumatra, operated by Chevron Pacific Indonesia, is a recently developed large field in the Rokan Hilir Regency.
Cepu. The Cepu field, operated by Mobil Cepu Ltd, a subsidiary of US-based ExxonMobil, is on the border of Central and East Java near the town of Tuban. The field was discovered in March 2001 and is estimated to have proven reserves of 600 million barrels of oil and 1.7 trillion cu ft of gas. Development of the field has been subject to ongoing discussions between the operators and the Indonesian government. Output was forecast to rise from around 20,000 bpd in early 2012 to around 165,000 bpd in late 2014.

Gas
There is growing recognition in Indonesia that the gas sector has considerable development potential. In principle, the Indonesian government supports moves to give increasing priority to investment in natural gas. In practice, private sector investors, especially foreign investors, have been reluctant to invest because many of the problems that are holding back investment in the oil sector also affect investment in gas. In mid-2013, the main potential gas fields in Indonesia were believed to include the following:
Mahakam. The Mahakam block in East Kalimantan, under the management of Total E&P Indonesie with participation from the Japanese oil and gas firm Inpex, provides around 30% of Indonesia's natural gas output. In mid-2013 the field was reported to be producing around 1.7 billion cu ft (48 million m3) per day of gas as well as 67,000 barrels (10,700 m3) of condensate. At the time, discussions were underway about the details of the future management of the block, involving a proposal that Pertamina take over all or part of the management. In October 2013 it was reported that Total E&P Indonesie had announced that it would stop exploration for new projects at the field. In 2015 the Energy and Resources Minister issued a regulation stipulating that the management of the block would be transferred from Total E&P Indonesie and Inpex, which had managed the field for over 50 years since 1966, to Pertamina. In late 2017, it was announced that Pertamina Hulu Indonesia, a subsidiary of Pertamina, would take over management of the block on 1 January 2018.
Tangguh. The Tangguh field in Bintuni Bay in West Papua Province, operated by BP (British Petroleum), is estimated to have proven gas reserves of 4.4 trillion cu ft (120 billion m3).
It is hoped that annual output of the field in the near future might reach 7.6 million tons of liquefied natural gas.
Arun. The Arun field in Aceh has been operated by ExxonMobil since the 1970s. The reserves at the field are now largely depleted, so production is slowly being phased out. At its peak, the Arun field produced around 3.4 million cu ft (96 thousand m3) of gas per day (1994) and about 130,000 barrels of condensate per day (1989). ExxonMobil affiliates also operate the nearby South Lhoksukon A and D fields as well as the North Sumatra offshore gas field. In September 2015, ExxonMobil Indonesia sold its assets in Aceh to Pertamina. The sale included the divestment by ExxonMobil of its assets (100%) in the North Sumatra Offshore block, its interests (100%) in the B block, and its stake (30%) in the PT Arun Natural Gas Liquefaction (NGL) plant. Following the completion of the deal, Pertamina holds an 85% stake in the Arun NGL plant.
East Natuna. The East Natuna gas field (formerly known as Natuna D-Alpha) in the Natuna Islands in the South China Sea is believed to be one of the biggest gas reserves in Southeast Asia. It is estimated to have proven reserves of 46 trillion cu ft (1.3 trillion m3) of gas. The aim is to begin expanded production in 2020, with production rising to 4,000 million cu ft/d (110 million m3/d), sustained for perhaps 20 years.
Banyu Urip. The Banyu Urip field, a major field for Indonesia, is in the Cepu block in Bojonegoro Regency in East Java. Interests in the block are held by Pertamina (45%), through its subsidiary PT Pertamina EP Cepu, and ExxonMobil Cepu Limited (45%), a subsidiary of ExxonMobil Corporation. ExxonMobil is the operator of the block.
Masela. The Masela field, currently (early 2016) under consideration for development by the Indonesian Government, is situated to the east of Timor Island, roughly halfway between Timor and Darwin in Australia. The main investors in the field are currently (early 2016) Inpex and Shell, who hold stakes of 65% and 35% respectively. The field, if developed, is likely to become the biggest deepwater gas project in Indonesia, involving an estimated investment of between $14 billion and $19 billion. Over 10 trillion cu ft (280 billion m3) of gas are said to exist in the block. However, development of the field has been delayed by uncertainty as to whether the field should be operated through an offshore or an onshore processing facility. In March 2016, after a row between his ministers, President Jokowi decreed that the processing facility should be onshore. This change of plans will involve greatly increased costs for the investors and will delay the start of the project. It was proposed that they submit revised Plans of Development (POD) to the Indonesian Government. See also List of gas fields in Indonesia.

Shale
There is potential for tight oil and shale gas in northern Sumatra and eastern Kalimantan. There are estimated to be 46 trillion cu ft (1.3 trillion m3) of shale gas and 7.9 billion barrels (1.26×109 m3) of shale oil which could be recovered with existing technologies. Pertamina has taken the lead in using hydraulic fracturing to explore for shale gas in northern Sumatra. Chevron Pacific Indonesia and NuEnergy Gas are also pioneers in using fracking in existing oil fields and in new exploration. Environmental concerns and a government-imposed cap on oil prices present barriers to full development of the substantial shale deposits in the country.
Sulawesi, Seram, Buru, and Papua in eastern Indonesia have shales that were deposited in marine environments, which may be more brittle and thus more suitable for fracking than the source rocks in western Indonesia, which have a higher clay content.

Coal bed methane
With 453 trillion cu ft (12.8 trillion m3) of coal bed methane (CBM) reserves, mainly in Kalimantan and Sumatra, Indonesia has the potential to redraw its energy charts as the United States did with shale gas. With low enthusiasm to develop CBM projects, partly owing to environmental concerns regarding emissions of greenhouse gases and contamination of water in the extraction process, the government targeted 8.9 million cu ft (250 thousand m3) per day at standard pressure for 2015.

Renewable energy sources
Indonesia has set a target of 23% and 31% of its energy to come from renewable sources by 2025 and 2050 respectively. In 2020, renewables in Indonesia contributed 11.2% to the national energy mix, with hydro and geothermal power plants making up the largest share. Despite its substantial renewable energy potential, Indonesia is still struggling to achieve its renewable target. The lack of adequate regulatory support to attract the private sector and regulatory inconsistency are often cited among the main reasons for the problems. One policy requires private investors to transfer their projects to PLN (the sole electricity off-taker in the country) at the end of agreement periods, which, combined with the fact that the Minister for Energy and Mineral Resources sets the consumer price of energy, has led to concern about return on investment. Another issue is related to financing: to achieve the 23% target, Indonesia needs an investment of about US$154 billion. The state is unable to allocate this huge amount, while there is reluctance from both potential investors and lending banks to get involved. There is also a critical challenge related to cost. The initial investment in renewable projects is still high, and as the electricity price has to be below the Region Generation Cost (BPP), which is already low in some major areas, projects become unattractive. Indonesia also has large coal reserves and is one of the world's largest net exporters of coal, making it less urgent to develop renewable-based power plants compared to countries that depend on coal imports.

It is recommended that the country remove subsidies for fossil fuels, establish a ministry of renewable energy, improve grid management, mobilize domestic resources to support renewable energy, and facilitate market entry for international investors. Continued reliance on fossil fuels may leave Indonesia's coal assets stranded and result in significant investments lost as renewable energy rapidly becomes cost-efficient worldwide.

In February 2020, it was announced that the People's Consultative Assembly was preparing its first renewable energy bill.

Biomass
An estimated 55% of Indonesia's population, 128 million people, primarily rely upon traditional biomass (mainly wood) for cooking. Reliance on this source of energy has the disadvantage that poor people in rural areas have little alternative but to collect timber from forests, and often cut down trees, to collect wood for cooking. A pilot 1 MW Palm Oil Mill Effluent (POME) power generator was inaugurated in September 2014.

Hydroelectricity
Indonesia has 75 GW of hydro potential, although only around 5 GW has been utilized.
Currently, only 34 GW of Indonesia's total hydro potential can feasibly be utilized, due to high development costs in certain areas. Indonesia has also set a target of 2 GW of installed hydroelectric capacity, including 0.43 GW of micro-hydro, by 2025. Indonesia has a potential of around 459.91 MW for micro hydropower developments, with only 4.54% of it currently exploited.

Geothermal energy
Indonesia uses some geothermal energy. According to the Renewable Energy Policy Network's Renewables 2013 Global Status Report, Indonesia has the third-largest installed generating capacity in the world. With 1.3 GW of installed capacity, Indonesia trails only the United States (3.4 GW) and the Philippines (1.9 GW), ahead of Mexico (1.0 GW), Italy (0.9 GW), New Zealand (0.8 GW), Iceland (0.7 GW), and Japan (0.5 GW). The current official policy is to encourage the increased use of geothermal energy for electricity production. Geothermal sites in Indonesia include the Wayang Windu Geothermal Power Station and the Kamojang plant, both in West Java. The development of the sector has been proceeding rather more slowly than hoped. Expansion appears to be held up by a range of technical, economic, and policy issues which have attracted considerable comment in Indonesia. However, it has proved difficult to formulate policies to respond to the problems. Two new plants were slated to open in 2020, at the Dieng Volcanic Complex in Central Java and at Mount Patuha in West Java.

Wind power
On average, low wind speeds mean that for many locations there is limited scope for large-scale energy generation from wind in Indonesia. Only small (<10 kW) and medium (<100 kW) generators are feasible. For Sumba Island in East Nusa Tenggara (NTT), according to NREL, three separate technical assessments have found that "Sumba's wind resources could be strong enough to be economically viable, with the highest estimated wind speeds ranging from 6.5 m/s to 8.2 m/s on an annual average basis." A very small amount of (off-grid) electricity is generated using wind power. For example, a small plant was established at Pandanmino, a small village on the south coast of Java in Bantul Regency, Yogyakarta Province, in 2011. However, it was established as an experimental plant and it is not clear whether funding for long-term maintenance will be available. In 2018, Indonesia installed its first wind farm, the 75 MW Sidrap wind farm in Sidenreng Rappang Regency, South Sulawesi, which is the biggest wind farm in Southeast Asia. In 2019, Indonesia installed another wind farm, with a capacity of 72 MW, in Jeneponto Regency, South Sulawesi.

Solar power
The Indonesian solar PV sector is relatively underdeveloped but has significant potential, up to 207 GW, of which less than 1% has been utilized. However, a lack of consistent and supportive policies, the absence of attractive tariffs and incentives, as well as concerns about on-grid readiness pose barriers to the rapid installation of solar power in Indonesia, including in rural areas.

Tidal power
With over 17,000 islands within its borders, Indonesia has great potential for tidal power development. The Alas Strait, a 50 km stretch of ocean between Lombok and Sumbawa Island, alone could potentially yield as much as 640 GWh of energy annually from tidal power. As of 2023, despite evidence of high energy potential, no Indonesian tidal power facilities have been developed.
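The physics behind both the wind and tidal resource estimates above is the kinetic power of a moving fluid, which grows with the cube of flow speed. The short sketch below is purely illustrative and assumes standard sea-level air density, a value not given in the assessments themselves.

```python
# Power available in a moving fluid scales with the cube of speed,
# which is why the average wind speeds quoted above matter so much:
# P = 0.5 * rho * A * v^3  (rho = fluid density, A = swept area)
RHO_AIR = 1.225  # kg/m^3, standard sea-level air density (assumed)

def power_density(v_m_per_s: float, rho: float = RHO_AIR) -> float:
    """Watts per square metre of swept area."""
    return 0.5 * rho * v_m_per_s ** 3

for v in (4.0, 6.5, 8.2):  # a low site vs. the Sumba range cited above
    print(f"{v} m/s -> {power_density(v):7.1f} W/m^2")
# 8.2 m/s carries roughly 8-9x the power of 4 m/s, since (8.2/4)^3 ~ 8.6.
# The same relation, with water's ~800x greater density, is why modest
# tidal currents such as those in the Alas Strait carry so much energy.
```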
Use of energy
Transport sector
Much of the energy in Indonesia is used for domestic transportation. The dominance of private vehicles, mostly cars and motorbikes, has led to an enormous demand for fuel. Energy consumption in the transport sector is growing by about 4.5% every year. There is therefore an urgent need for policy reform and infrastructure investment to enhance the energy efficiency of transport, particularly in urban areas.

There are large opportunities to reduce energy consumption in the transport sector, for example through the adoption of higher energy efficiency standards for private cars and motorbikes and the expansion of mass transit networks. Many of these measures would be more cost-effective than the current transport systems. There is also scope to reduce the carbon intensity of transport energy, particularly by replacing diesel with biodiesel or through electrification. Both would require comprehensive supply chain analysis to ensure that the biofuels and power plants do not have wider environmental impacts such as deforestation or air pollution.

Electricity sector
Access to electricity
Over 50% of households in 2011 had an electricity connection; an estimated 63 million people in 2011 did not have direct access to electricity. However, by 2019, 98.9% of the population had access to electricity.

Organisations
The electricity sector, dominated by the state-owned electricity utility Perusahaan Listrik Negara, is another major consumer of primary energy.

Government policy
Carbon tax
Carbon tax provisions are regulated in Article 13 of Law 7/2021, under which a carbon tax will be imposed on entities producing carbon emissions that have a negative impact on the environment. Based on Law 7/2021, the carbon tax will be implemented through two specific schemes, i.e., the carbon tax scheme (cap and tax) and the carbon trade scheme (cap and trade). In the carbon trade scheme, individuals or companies ("entities") that produce emissions exceeding the cap are required to purchase emission permit certificates ("Sertifikat Izin Emisi"/SIE) from other entities that produce emissions below the cap. In addition, entities can also purchase emission reduction certificates ("Sertifikat Penurunan Emisi"/SPE). However, if an entity is unable to purchase SIE or SPE in full for its resulting emissions, the cap and tax scheme will apply, and entities producing residual emissions that exceed the cap will be subject to the carbon tax.

Major energy companies in Indonesia
Indonesian firms
Pertamina, the state-owned oil company
Pertamina Gas Negara, the state-owned gas company, a subsidiary of Pertamina
Perusahaan Listrik Negara, the state-owned electricity company
PT Bumi Resources, owned by the Bakrie Group
PT Medco Energi International, the largest publicly listed oil and gas company in Indonesia
Adaro Energy, one of the largest coal mining companies in Indonesia

Foreign firms
US-based firm PT Chevron Pacific Indonesia, the largest producer of crude oil in Indonesia; Chevron produced (2014) around 40% of the crude oil in Indonesia
Total E&P Indonesia, which operates the East Mahakam field in Kalimantan and other fields
ExxonMobil, one of the main foreign operators in Indonesia
Equinor, a Norwegian multinational firm, which has been operating in Indonesia since 2007, especially in Eastern Indonesia
BP, a major LNG operator in the Tangguh gas field in West Papua
ConocoPhillips, which currently operates four production-sharing contracts, including at Natuna and in Sumatra
Inpex, a Japanese firm established in 1966 as North Sumatra Offshore Petroleum Exploration Co. Ltd.

Greenhouse gas emissions
Indonesia's total CO2 emissions exceeded those of Italy in 2009. In terms of all greenhouse gas emissions, including construction and deforestation, Indonesia ranked fourth in 2005, after China, the US and Brazil. The carbon intensity of its electricity generation is higher than that of most other countries, at over 600 gCO2/kWh.

See also
Nuclear power in Indonesia
List of power stations in Indonesia
Rural electrification
List of main infrastructure projects in Indonesia
List of renewable energy topics by country
List of gas fields in Indonesia

References
heavy industry
Heavy industry is an industry that involves one or more characteristics such as large and heavy products; large and heavy equipment and facilities (such as heavy equipment, large machine tools, huge buildings and large-scale infrastructure); or complex or numerous processes. Because of those factors, heavy industry involves higher capital intensity than light industry does, and is also often more heavily cyclical in investment and employment.

Though important to the economic development and industrialization of economies, heavy industry can also have significant negative side effects: both local communities and workers frequently encounter health risks, heavy industries tend to produce byproducts that pollute the air and water, and the industrial supply chain is often involved in other environmental justice issues arising from mining and transportation. Because of their intensity, heavy industries are also significant contributors to the greenhouse gas emissions that cause climate change, and certain parts of these industries, especially the high-heat processes used in metalworking and cement production, are hard to decarbonize. Industrial activities such as mining also result in pollution by heavy metals, which are very damaging to the environment because they cannot be chemically degraded.

Types
Transportation and construction, along with their upstream manufacturing supply businesses, have been the bulk of heavy industry throughout the industrial age, along with some capital-intensive manufacturing. Traditional examples from the mid-19th century through the early 20th included steelmaking, artillery production, locomotive manufacturing, machine tool building, and the heavier types of mining. From the late 19th century through the mid-20th, as the chemical industry and electrical industry developed, they involved components of both heavy industry and light industry, which was soon also true for the automotive industry and the aircraft industry. Modern shipbuilding (since steel replaced wood) and large components such as ship turbochargers are also characteristic of heavy industry.

A typical heavy industry activity is the production of large systems, such as the construction of skyscrapers and large dams during the post–World War II era, and the manufacture and deployment of large rockets and giant wind turbines through the 21st century.

As part of economic strategy
Many East Asian countries have relied on heavy industry as a key part of their development strategies, and many still do for economic growth. This reliance on heavy industry is typically a matter of government economic policy. Among Japanese and Korean firms with "heavy industry" in their names, many are also manufacturers of aerospace products and defense contractors to their respective countries' governments, such as Japan's Mitsubishi Heavy Industries and Fuji Heavy Industries, and Korea's Hyundai Rotem, a joint project of Hyundai Heavy Industries and Daewoo Heavy Industries.

In 20th-century communist states, the planning of the economy often focused on heavy industry as an area for large investments (at the expense of investing in the greater production of in-demand consumer goods), even to the extent of painful opportunity costs on the production–possibility frontier (classically, "lots of guns and not enough butter"). This was motivated by fears of failing to maintain military parity with foreign capitalist powers.
For example, the Soviet Union's industrialization in the 1930s, with heavy industry as the favored emphasis, sought to bring its ability to produce trucks, tanks, artillery, aircraft, and warships up to a level that would make the country a great power. China under Mao Zedong pursued a similar strategy, eventually culminating in the Great Leap Forward of 1958–1960, an unsuccessful attempt to rapidly industrialize and collectivize that led to the largest famine in human history, killing up to 50 million people, while severely depleting the production of agricultural products and failing to increase the output of usable-quality industrial goods.

In zoning
Heavy industry is also sometimes a special designation in local zoning laws, allowing the planned placement of industries with heavy impacts (on environment, infrastructure, and employment). For example, the zoning restrictions for landfills usually take into account the heavy truck traffic that will exert expensive wear on the roads leading to the landfill.

Environmental impacts
Greenhouse gas emissions
As of 2019, heavy industry emits about 22% of global greenhouse gas emissions, with high-temperature heat for heavy industry accounting for about 10% of global emissions. The steel industry alone is responsible for 7 to 9% of global carbon dioxide emissions, which is inherently related to its main production process, the reduction of iron with coal. In order to reduce these carbon dioxide emissions, carbon capture and utilization and carbon capture and storage technologies are being investigated. Heavy industry has the advantage of being a point source, to which these technologies are less energy-intensive to apply, resulting in cheaper carbon capture compared to direct air capture.

Pollution
Industrial activities such as the improper disposal of radioactive material, the burning of coal and fossil fuels, and the release of liquid waste into the environment contribute to the pollution of water, soil, air, and wildlife.

With regard to water pollution, when waste is disposed of in the environment, it affects the quality of the available water supply, which has a negative impact on the ecosystem as well as on the water supply used by farms for irrigation, which in turn affects crops. Heavy metals have also been shown to pollute soil, deteriorating arable land quality and adversely impacting food safety (for vegetables or grain, for example). This occurs as a result of heavy industry when heavy metals sink into the ground and contaminate the crops that grow in it.

Heavy metal concentrations resulting from water or soil pollution can become deadly once they pass certain thresholds, leading to plant poisoning. Heavy metals can further affect many levels of the ecosystem through bioaccumulation, because humans and many other animals rely on these plant species as sources of food. Plants can take up these metals from the soil and begin the transfer of metals to higher levels of the food chain, eventually reaching humans.

Regarding air pollution: long-term or short-term exposure of children to industry-based air pollution can cause several adverse effects, such as cardiovascular diseases, respiratory diseases and even death. Children are also more susceptible to the harms of air pollution than adults. Heavy metals such as lead, chromium, cadmium, and arsenic form dust-fall particles and are harmful to the human body, with the latter two being carcinogens.
As a result of pollution, the toxic chemicals released into the atmosphere also contribute to global warming through increased absorption of radiation.

Sacrifice zones

References

External links
Definition of "heavy industry" according to Investopedia.com
carbon dioxide removal
Carbon dioxide removal (CDR), also known as carbon removal, greenhouse gas removal (GGR) or negative emissions, is a process in which carbon dioxide gas (CO2) is removed from the atmosphere by deliberate human activities and durably stored in geological, terrestrial, or ocean reservoirs, or in products.: 2221  In the context of net zero greenhouse gas emissions targets, CDR is increasingly integrated into climate policy as an element of climate change mitigation strategies. Achieving net zero emissions will require both deep cuts in emissions and the use of CDR. CDR can counterbalance emissions that are technically difficult to eliminate, such as some agricultural and industrial emissions.: 114

CDR methods include afforestation, reforestation, agricultural practices that sequester carbon in soils (carbon farming), wetland restoration and blue carbon approaches, bioenergy with carbon capture and storage (BECCS), ocean fertilization, ocean alkalinity enhancement, and direct air capture when combined with storage.: 115  To assess whether negative emissions are achieved by a particular process, a comprehensive life cycle analysis of the process must be performed.

As of 2023, CDR is estimated to remove around 2 gigatons of CO2 per year, which is equivalent to 4% of the greenhouse gases emitted per year by human activities.: 8  However, there is significant uncertainty around this number because there is no established or accurate method of quantifying the amount of carbon removed from the atmosphere. There is potential to remove and sequester up to 10 gigatons of carbon dioxide per year by using those existing CDR methods which can be safely and economically deployed now.

Definitions
Carbon dioxide removal (CDR) is defined by the IPCC as: Anthropogenic activities removing CO2 from the atmosphere and durably storing it in geological, terrestrial, or ocean reservoirs, or in products. It includes existing and potential anthropogenic enhancement of biological or geochemical sinks and direct air capture and storage, but excludes natural CO2 uptake not directly caused by human activities.: 2221

Synonyms for CDR include greenhouse gas removal (GGR), negative emissions technology, and carbon removal. Technologies have been proposed for removing non-CO2 greenhouse gases such as methane from the atmosphere, but only carbon dioxide is currently feasible to remove at scale. Therefore, in most contexts, greenhouse gas removal means carbon dioxide removal. The term geoengineering (or climate engineering) is sometimes used in the scientific literature for both CDR and SRM (solar radiation management) when the techniques are used at a global scale.: 6–11  The terms geoengineering and climate engineering are no longer used in IPCC reports.

Categories
CDR methods can be placed in different categories based on different criteria:: 114
Role in the carbon cycle (land-based biological; ocean-based biological; geochemical; chemical); or
Timescale of storage (decades to centuries; centuries to millennia; thousand years or longer)

Concepts using similar terminology
CDR can be confused with carbon capture and storage (CCS), a process in which carbon dioxide is collected from point sources such as gas-fired power plants, whose smokestacks emit CO2 in a concentrated stream. The CO2 is then compressed and sequestered or utilized.
When used to sequester the carbon from a gas-fired power plant, CCS reduces emissions from continued use of the point source, but does not reduce the amount of carbon dioxide already in the atmosphere.

Role in climate change mitigation
Use of CDR reduces the overall rate at which humans add carbon dioxide to the atmosphere.: 114  The Earth's surface temperature will stabilize only after global emissions have been reduced to net zero, which will require both aggressive efforts to reduce emissions and deployment of CDR.: 114  It is not feasible to bring net emissions to zero without CDR, as certain types of emissions are technically difficult to eliminate.: 1261  Emissions that are difficult to eliminate include nitrous oxide emissions from agriculture,: 114  aviation emissions,: 3  and some industrial emissions.: 114  In climate change mitigation strategies, the use of CDR counterbalances those emissions.: 114

After net zero emissions have been achieved, CDR could be used to reduce atmospheric CO2 concentrations, which could partially reverse the warming that has already occurred by that date. All emission pathways that limit global warming to 1.5 °C or 2 °C by the year 2100 assume the use of CDR in combination with emission reductions.

Reliance on large-scale deployment of CDR was regarded in 2018 as a "major risk" to achieving the goal of less than 1.5 °C of warming, given the uncertainties in how quickly CDR can be deployed at scale. Strategies for mitigating climate change that rely less on CDR and more on sustainable use of energy carry less of this risk. The possibility of large-scale future CDR deployment has been described as a moral hazard, as it could lead to a reduction in near-term efforts to mitigate climate change.: 124  The 2019 NASEM report concludes: Any argument to delay mitigation efforts because NETs will provide a backstop drastically misrepresents their current capacities and the likely pace of research progress.

When CDR is framed as a form of climate engineering, people tend to view it as intrinsically risky. In fact, CDR addresses the root cause of climate change and is part of strategies to reduce net emissions and manage risks related to elevated atmospheric CO2 levels.

Permanence
Forests, kelp beds, and other forms of plant life absorb carbon dioxide from the air as they grow and bind it into biomass. However, these biological stores are considered volatile carbon sinks, as long-term sequestration cannot be guaranteed. For example, natural events such as wildfires or disease, economic pressures and changing political priorities can result in the sequestered carbon being released back into the atmosphere.

Carbon dioxide that has been removed from the atmosphere can also be stored in the Earth's crust by injecting it into the subsurface, or in the form of insoluble carbonate salts. These approaches remove carbon from the atmosphere and sequester it for a considerable duration, presumably thousands to millions of years.

Current and potential scale
As of 2023, CDR is estimated to remove about 2 gigatons of CO2 per year, almost entirely by low-tech methods such as reforestation and the creation of new forests. This is equivalent to 4% of the greenhouse gases emitted per year by human activities.: 8
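A quick back-of-the-envelope check of these scale figures, using only the numbers quoted above:

```python
# If ~2 GtCO2 removed per year is ~4% of annual anthropogenic emissions,
# the implied gross emissions are about 2 / 0.04 = 50 GtCO2e per year.
cdr_gt = 2.0
share = 0.04
gross_gt = cdr_gt / share       # ~50 GtCO2e emitted per year
net_gt = gross_gt - cdr_gt      # ~48 GtCO2e net additions to the atmosphere
print(f"gross ~{gross_gt:.0f} Gt, net ~{net_gt:.0f} Gt")
# "Net zero" is reached only when removals grow to match whatever
# residual, hard-to-eliminate emissions remain after deep cuts.
```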
A 2019 consensus study report by NASEM assessed the potential of all forms of CDR other than ocean fertilization that could be deployed safely and economically using current technologies, and estimated that they could remove up to 10 gigatons of CO2 per year if fully deployed worldwide. In 2018, all analyzed mitigation pathways that would prevent more than 1.5 °C of warming included CDR measures.

Some mitigation pathways propose achieving higher rates of CDR through massive deployment of one technology; however, these pathways assume that hundreds of millions of hectares of cropland are converted to growing biofuel crops. Further research in the areas of direct air capture, geologic sequestration of carbon dioxide, and carbon mineralization could potentially yield technological advancements that make higher rates of CDR economically feasible.

Methods
Overview listing based on technology readiness level
The following is a list of known CDR methods in the order of their technology readiness level (TRL). The ones at the top have a high TRL of 8 to 9 (9 being the maximum possible value, meaning the technology is proven); the ones at the bottom have a low TRL of 1 to 2, meaning the technology is not proven or only validated at laboratory scale.: 115
Afforestation/reforestation
Soil carbon sequestration in croplands and grasslands
Peatland and coastal wetland restoration
Agroforestry, improved forest management
Biochar carbon removal (BCR)
Direct air carbon capture and storage (DACCS), bioenergy with carbon capture and storage (BECCS)
Enhanced weathering (alkalinity enhancement)
'Blue carbon management' in coastal wetlands (restoration of vegetated coastal ecosystems; an ocean-based biological CDR method which encompasses mangroves, salt marshes and seagrass beds)
Ocean fertilisation, ocean alkalinity enhancement that amplifies the oceanic carbon cycle

The CDR methods with the greatest potential to contribute to climate change mitigation efforts, as per illustrative mitigation pathways, are the land-based biological CDR methods (primarily afforestation/reforestation (A/R)) and/or bioenergy with carbon capture and storage (BECCS). Some of the pathways also include direct air capture and storage (DACCS).: 114

Afforestation, reforestation, and forestry management
Trees use photosynthesis to absorb carbon dioxide and store the carbon in wood and soils. Afforestation is the establishment of a forest in an area where there was previously no forest.: 1794  Reforestation is the re-establishment of a forest that has been previously cleared.: 1812  Forests are vital for human society, animals and plant species, because trees keep the air clean, regulate the local climate and provide a habitat for numerous species.

As trees grow they absorb CO2 from the atmosphere and store it in living biomass, dead organic matter and soils. Afforestation and reforestation – sometimes referred to collectively as 'forestation' – facilitate this process of carbon removal by establishing or re-establishing forest areas.
It takes forests approximately 10 years to ramp up to the maximum sequestration rate.: 26–28  Depending on the species, the trees will reach maturity after around 20 to 100 years, after which they store carbon but do not actively remove it from the atmosphere.: 26–28  Carbon can be stored in forests indefinitely, but the storage can also be much more short-lived, as trees are vulnerable to being cut, burned, or killed by disease or drought.: 26–28  Once mature, forest products can be harvested and the biomass stored in long-lived wood products, or used for bioenergy or biochar. Consequent forest regrowth then allows continuing CO2 removal.: 26–28  Risks to the deployment of new forest include the availability of land, competition with other land uses, and the comparatively long time from planting to maturity.: 26–28

Agricultural practices
Carbon farming is a name for a variety of agricultural methods aimed at sequestering atmospheric carbon in the soil and in crop roots, wood and leaves. Increasing a soil's organic matter content can aid plant growth, increase total carbon content, improve soil water retention capacity and reduce fertilizer use. Carbon farming methods typically have a cost, meaning farmers and landowners need a way to profit from their use, which in turn requires government programs.

Bioenergy with carbon capture and storage (BECCS)

Biochar carbon removal (BCR)
Biochar is created by the pyrolysis of biomass and is under investigation as a method of carbon sequestration. Biochar is a charcoal that is used for agricultural purposes and also aids in carbon sequestration, the capture and long-term storage of carbon. It is created through pyrolysis, the heating of biomass to high temperatures in an environment with low oxygen levels. What remains is a material known as char, similar to charcoal but made through a sustainable process from biomass, the organic matter produced by living or recently living organisms, most commonly plants or plant-based material. A study by the UK Biochar Research Centre has stated that, on a conservative estimate, biochar can store 1 gigaton of carbon per year. With greater effort in marketing and acceptance of biochar, the benefit of biochar carbon removal could be the storage of 5–9 gigatons per year in soils. At the moment, however, biochar is restricted by the terrestrial carbon storage capacity, which is reached when the soil system is at equilibrium, and it requires regulation because of the threat of leakage.

Direct air capture with carbon sequestration (DACCS)

Direct ocean removal
There are several methods of sequestering carbon from the ocean, where dissolved carbonate in the form of carbonic acid is in equilibrium with atmospheric carbon dioxide. These include ocean fertilization, the purposeful introduction of plant nutrients to the upper ocean. While it is one of the more well-researched carbon dioxide removal approaches, ocean fertilization would only sequester carbon on a timescale of 10–100 years. While surface ocean acidity may decrease as a result of nutrient fertilization, sinking organic matter will remineralize, increasing deep ocean acidity. A 2021 report on CDR indicates that there is medium-high confidence that the technique could be efficient and scalable at low cost, with medium environmental risks.
Ocean fertilization is estimated to be able to sequester 0.1 to 1 gigatonnes of carbon dioxide per year at a cost of US$8 to $80 per tonne.

Ocean alkalinity enhancement involves grinding, dispersing, and dissolving minerals such as olivine, limestone, silicates, or calcium hydroxide to precipitate carbonate, sequestered as deposits on the ocean floor. The removal potential of alkalinity enhancement is uncertain, estimated at between 0.1 and 1 gigatonnes of carbon dioxide per year at a cost of US$100 to $150 per tonne.

Electrochemical techniques such as electrodialysis can remove carbonate from seawater using electricity. While such techniques used in isolation are estimated to be able to remove 0.1 to 1 gigatonnes of carbon dioxide per year at a cost of US$150 to $2,500 per tonne, these methods are much less expensive when performed in conjunction with seawater processing such as desalination, where salt and carbonate are removed simultaneously. Preliminary estimates suggest that the cost of such carbon removal can be paid for, in large part if not entirely, from the sale of the desalinated water produced as a byproduct.

Issues
Economic issues
The cost of CDR differs substantially depending on the maturity of the technology employed as well as the economics of both voluntary carbon removal markets and the physical output; for example, the pyrolysis of biomass produces biochar that has various commercial applications, including soil regeneration and wastewater treatment. In 2021, DAC cost between $250 and $600 per ton, compared to $100 for biochar and less than $50 for nature-based solutions such as reforestation and afforestation. The fact that biochar commands a higher price in the carbon removal market than nature-based solutions reflects the fact that it is a more durable sink, with carbon being sequestered for hundreds or even thousands of years, while nature-based solutions represent a more volatile form of storage, with risks related to forest fires, pests, economic pressures and changing political priorities. The Oxford Principles for Net Zero Aligned Carbon Offsetting state that to be compatible with the Paris Agreement: "...organizations must commit to gradually increase the percentage of carbon removal offsets they procure with the view of exclusively sourcing carbon removals by mid-century." These initiatives, along with the development of new industry standards for engineered carbon removal, such as the Puro Standard, will help to support the growth of the carbon removal market.

Although CDR was not covered by the EU Allowance as of 2021, the European Commission is preparing for carbon removal certification and considering carbon contracts for difference. CDR might also in future be added to the UK Emissions Trading Scheme. As of the end of 2021, carbon prices for both of these cap-and-trade schemes, currently based on carbon reductions as opposed to carbon removals, remained below $100.

As of early 2023, financing has fallen short of the sums required for high-tech CDR methods to contribute significantly to climate change mitigation, though available funds have recently increased substantially. Most of this increase has come from voluntary private sector initiatives, such as a private sector alliance led by Stripe, with prominent members including Meta, Google and Shopify, which in April 2022 revealed a nearly $1 billion fund to reward companies able to permanently capture and store carbon.
According to senior Stripe employee Nan Ransohoff, the fund was "roughly 30 times the carbon-removal market that existed in 2021. But it's still 1,000 times short of the market we need by 2050." The predominance of private sector funding has raised concerns, as historically voluntary markets have proved "orders of magnitude" smaller than those brought about by government policy. As of 2023, however, various governments have increased their support for CDR; these include Sweden, Switzerland, and the US. Recent activity from the US government includes the June 2022 Notice of Intent to fund the Bipartisan Infrastructure Law's $3.5 billion CDR program, and the signing into law of the Inflation Reduction Act of 2022, which contains the 45Q tax credit to enhance the CDR market.

Removal of other greenhouse gases
Although some researchers have suggested methods for removing methane, others say that nitrous oxide would be a better subject for research due to its longer lifetime in the atmosphere.

See also
Biological carbon fixation
Carbon dioxide in Earth's atmosphere
Carbon dioxide scrubber
Carbon-neutral fuel
Climate change scenario
List of emerging technologies
Low-carbon economy
Net zero
Virgin Earth Challenge

References

External links
Factsheet about CDR by IPCC Sixth Assessment Report WG III
Deep Dives by Carbon180. Info about carbon removal solutions.
The Road to Ten Gigatons - Carbon Removal Scale Up Challenge Game.
Land - the planet's carbon sink, United Nations.
list of u.s. states and territories by carbon dioxide emissions
This is a list of U.S. states and territories by carbon dioxide emissions for energy use, as well as per capita and by area. The state with the highest total carbon dioxide emissions is Texas and the lowest is Vermont. The state with the highest per capita carbon dioxide emissions is Wyoming and the lowest is New York.

Table

See also
Greenhouse gas emissions by the United States
List of countries by carbon dioxide emissions
Top contributors to greenhouse gas emissions

References

External links
Energy-Related Carbon Dioxide Emissions by State, 2000-2015
gas-fired power plant
A gas-fired power plant, sometimes referred to as a gas-fired power station or natural gas power plant, is a thermal power station that burns natural gas to generate electricity. Gas-fired power plants generate almost a quarter of world electricity and are significant sources of greenhouse gas emissions. However, they can provide seasonal, dispatchable energy generation to compensate for variable renewable energy deficits, where hydropower or interconnectors are not available. In the early 2020s, batteries became competitive with gas peaker plants.

Basic concepts: heat into mechanical energy into electrical energy
A gas-fired power plant is a type of fossil fuel power station in which chemical energy stored in natural gas, which is mainly methane, is converted successively into thermal energy, mechanical energy and, finally, electrical energy. Although they cannot exceed the Carnot cycle limit for conversion of heat energy into useful work, the excess heat (i.e., the difference between the chemical energy used up and the useful work generated) may be used in cogeneration plants to heat buildings, to produce hot water, or to heat materials on an industrial scale.

Plant types
Simple cycle gas turbine
In a simple cycle gas turbine, also known as an open-cycle gas turbine (OCGT), hot gas drives a gas turbine to generate electricity. This type of plant is relatively cheap to build and can start very quickly, but due to its lower efficiency it is at most only run for a few hours a day as a peaking power plant.

Combined cycle gas turbine (CCGT)
CCGT power plants consist of simple cycle gas turbines, which use the Brayton cycle, followed by a heat recovery steam generator and a steam turbine, which use the Rankine cycle. The most common configuration is two gas turbines supporting one steam turbine. They are more efficient than simple cycle plants, can achieve efficiencies of up to 55%, and have dispatch times of around half an hour.
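The efficiency gain of a combined cycle can be seen from a simple idealized calculation, sketched below under the assumption that the steam stage receives all of the gas turbine's rejected heat (a real heat recovery steam generator loses some). With plausible stage efficiencies it reproduces the roughly 55% figure cited above.

```python
# Why a combined cycle beats a simple cycle: the steam (Rankine) stage
# recovers part of the gas turbine's exhaust heat. Idealized sketch only.
def combined_cycle_efficiency(eta_gas: float, eta_steam: float) -> float:
    # Of each unit of fuel energy: eta_gas becomes work in the gas turbine;
    # the remaining (1 - eta_gas) leaves as exhaust heat, of which the
    # steam turbine converts a fraction eta_steam into additional work.
    return eta_gas + (1 - eta_gas) * eta_steam

# Assumed stage efficiencies of 35% (gas) and 30% (steam) are illustrative:
print(combined_cycle_efficiency(0.35, 0.30))  # ~0.545, i.e. about 55%
```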
Reciprocating engine
Reciprocating internal combustion engines tend to be under 20 MW (thus much smaller than other types of natural gas-fired electricity generator) and are typically used for emergency power or to balance variable renewable energy such as wind and solar.

Greenhouse gas emissions
In total, gas-fired power stations emit about 450 grams (1 lb) of CO2 per kilowatt-hour of electricity generated. This is about half that of coal-fired power stations but much more than nuclear power plants and renewable energy. Life-cycle emissions of gas-fired power stations may be increased by methane emissions, such as those from gas leaks.

Carbon capture
Very few power plants have carbon capture and storage or carbon capture and utilization.

Hydrogen
Gas-fired power plants can be modified to run on hydrogen, and according to General Electric, a more economically viable option than CCS would be to use more and more hydrogen in the gas turbine fuel. Hydrogen can at first be created from natural gas through steam reforming, or by heating to precipitate carbon, as a step towards a hydrogen economy, thus eventually reducing carbon emissions.

Economics
New plants
Sometimes a new battery storage power station together with solar power or wind power is cheaper in the long term than building a new gas plant, as the gas plant risks becoming a stranded asset.

Existing plants
As of 2019, a few gas-fired power plants are being retired because they are unable to stop and start quickly enough. However, despite the falling cost of variable renewable energy, most existing gas-fired power plants remain profitable, especially in countries without a carbon price, due to their dispatchable generation and because shale gas and liquefied natural gas prices have fallen since they were built. Even in places with a carbon price, such as the EU, existing gas-fired power stations remain economically viable, partly due to increasing restrictions on coal-fired power because of its pollution.

Politics
Even when replacing coal power, the decision to build a new plant may be controversial.

See also
List of natural gas power stations

External links
Global gas plant tracker by Global Energy Monitor

References
global warming solutions act of 2006
The Global Warming Solutions Act of 2006, or Assembly Bill (AB) 32, is a California state law that fights global warming by establishing a comprehensive program to reduce greenhouse gas emissions from all sources throughout the state. AB 32 was co-authored by then-Assemblymember Fran Pavley (D-Agoura Hills) and then-Speaker of the California Assembly Fabian Nunez (D-Los Angeles) and signed into law by Governor Arnold Schwarzenegger on September 27, 2006.

On June 1, 2005, Governor Schwarzenegger signed an executive order known as Executive Order S-3-05, which established greenhouse gas emissions targets for the state. The executive order required the state to reduce its greenhouse gas emissions to 2000 levels by 2010, to 1990 levels by 2020, and to a level 80% below 1990 levels by 2050. However, to implement this measure, the California Air Resources Board (CARB) needed authority from the legislature. The California State Legislature passed the Global Warming Solutions Act to address this issue and gave CARB authority to implement the program.

AB 32 requires the California Air Resources Board (CARB or ARB) to develop regulations and market mechanisms to reduce California's greenhouse gas emissions to 1990 levels by the year 2020, representing approximately a 30% reduction statewide, with mandatory caps beginning in 2012 for significant emissions sources. The bill also allows the Governor to suspend the emissions caps for up to a year in case of emergency or significant economic harm. The State of California leads the nation in energy efficiency standards and plays a lead role in environmental protection, but is also the 12th largest emitter of carbon worldwide. Greenhouse gas emissions are defined in the bill to include all of the following: carbon dioxide, methane, nitrous oxide, sulfur hexafluoride, hydrofluorocarbons and perfluorocarbons. These are the same greenhouse gases listed in Annex A of the Kyoto Protocol.

Requirements
AB 32 includes several specific requirements of the California Air Resources Board:
Prepare and approve a scoping plan for achieving the maximum technologically feasible and cost-effective reductions in greenhouse gas sources or categories of sources of greenhouse gases by 2020. The scoping plan, approved by the ARB Board on December 12, 2008, provides the outline for actions to reduce greenhouse gases in California. The approved scoping plan indicates how these emission reductions will be achieved from significant greenhouse gas sources via regulations, market mechanisms and other actions.
Identify the statewide level of greenhouse gas emissions in 1990 to serve as the emissions limit to be achieved by 2020. In December 2007, the Board approved the 2020 emission limit of 427 million metric tons of carbon dioxide equivalent of greenhouse gases; however, this limit was later revised to 431 million metric tons using updated methods that had been outlined in the IPCC Fourth Assessment Report.
Adopt a regulation requiring the mandatory reporting of greenhouse gas emissions. In December 2007, the Board adopted a regulation requiring the largest industrial sources to report and verify their greenhouse gas emissions. The reporting regulation serves as a solid foundation to determine greenhouse gas emissions and track future changes in emission levels.
In 2011, the Board adopted the cap-and-trade regulation. The cap-and-trade program covers major sources of GHG emissions in the state, such as refineries, power plants, industrial facilities, and transportation fuels. The program includes an enforceable emissions cap that will decline over time. The state will distribute allowances, which are tradable permits, equal to the emissions allowed under the cap. Sources under the cap will need to surrender allowances and offsets equal to their emissions at the end of each compliance period (a minimal sketch of this compliance arithmetic follows this list).
Identify and adopt regulations for discrete early actions that could be enforceable on or before January 1, 2010. In 2007, the Board identified nine discrete early action measures, including regulations affecting landfills, motor vehicle fuels, refrigerants in cars, tire pressure, and port operations, among them ship electrification at ports and the reduction of high-GWP gases in consumer products.
Ensure early voluntary reductions receive appropriate credit in the implementation of AB 32.
Convene an Environmental Justice Advisory Committee (EJAC) to advise the Board in developing the Scoping Plan and any other pertinent matter in implementing AB 32. The EJAC has met 12 times since early 2007, providing comments on the proposed early action measures and the development of the scoping plan, and submitted its comments and recommendations on the scoping plan in October 2008. ARB will continue to work with the EJAC as AB 32 is implemented.
Appoint an Economic and Technology Advancement Advisory Committee (ETAAC) to provide recommendations for technologies, research and greenhouse gas emission reduction measures. After a year-long public process, the ETAAC submitted a report of its recommendations to the Board in February 2008. The ETAAC also reviewed and provided comments on the scoping plan.
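The following is a minimal sketch of that compliance arithmetic, with hypothetical names and numbers; the actual regulation imposes many further rules, such as limits on offset use.

```python
# Illustrative cap-and-trade compliance check (names and figures invented).
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    emissions_mt: float   # verified emissions, million metric tons CO2e
    allowances_mt: float  # allowances held (distributed or purchased)
    offsets_mt: float     # offset credits held

def compliant(source: Source) -> bool:
    # At the end of a compliance period, a covered source must surrender
    # allowances plus offsets at least equal to its verified emissions.
    return source.allowances_mt + source.offsets_mt >= source.emissions_mt

refinery = Source("refinery", emissions_mt=2.4, allowances_mt=2.0, offsets_mt=0.3)
print(compliant(refinery))  # False: 0.1 Mt more allowances or offsets needed
```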
The cap-and-trade program includes an enforceable emissions cap that will decline over time. The State will distribute allowances, which are tradable permits, equal to the emissions allowed under the cap. Sources under the cap will need to surrender allowances and offsets equal to their emissions at the end of each compliance period.
- Identify and adopt regulations for discrete early actions that could be enforceable on or before January 1, 2010. In 2007, the Board identified nine discrete early action measures, including regulations affecting landfills, motor vehicle fuels, refrigerants in cars, tire pressure, and port operations (such as ship electrification at ports) and the reduction of high-GWP gases in consumer products.
- Ensure early voluntary reductions receive appropriate credit in the implementation of AB 32.
- Convene an Environmental Justice Advisory Committee (EJAC) to advise the Board in developing the Scoping Plan and any other pertinent matter in implementing AB 32. The EJAC has met 12 times since early 2007, providing comments on the proposed early action measures and the development of the scoping plan, and submitted its comments and recommendations on the scoping plan in October 2008. ARB will continue to work with the EJAC as AB 32 is implemented.
- Appoint an Economic and Technology Advancement Advisory Committee (ETAAC) to provide recommendations for technologies, research and greenhouse gas emission reduction measures. After a year-long public process, the ETAAC submitted a report of its recommendations to the Board in February 2008. The ETAAC also reviewed and provided comments on the scoping plan.
Timeline
AB 32 stipulates the following timeline:
- In late January 2014, ARB plans to release the draft proposed Scoping Plan Update and Environmental Assessment.
- In February 2014, ARB will have a Board meeting discussion that will include additional opportunities for stakeholder feedback and public comment.
- In spring 2014, ARB will hold a Board Hearing to consider the Final Scoping Plan Update and Environmental Assessment.
As of August 2016, AB 32 continues to be built upon. On September 9, Governor Jerry Brown strengthened the commitment to AB 32 by signing SB 32 by Sen. Fran Pavley (D-Agoura Hills) and AB 197 by Assemblymember Eduardo Garcia (D-Coachella). This legally enshrined the goal outlined by Executive Order B-30-15: to reduce the state's greenhouse gas emissions 40% below 1990 levels by 2030. On July 17, 2017, both houses of the California State Legislature passed AB 398 with a two-thirds majority vote, authorizing the California Air Resources Board to operate a cap-and-trade system to achieve these reductions.
Achievements
To date, ARB has identified nine discrete early action measures to reduce greenhouse gas emissions, including regulations affecting landfills, motor vehicle fuels, refrigerants in cars, tire pressure, port operations and other sources. Regulatory development for additional measures is ongoing. The Environmental Justice Advisory Committee (EJAC) has met 12 times since early 2007 and submitted comments and recommendations on the scoping plan in October 2008. The Economic and Technology Advancement Advisory Committee (ETAAC) submitted a report of its recommendations to the Board in February 2008. The ETAAC also reviewed and provided comments on the scoping plan. In June 2013, ARB held a kickoff public workshop in Sacramento to discuss the development of the Scoping Plan Update, public process, and overall schedule.
In July 2013, subsequent regional workshops were held in Diamond Bar, Fresno, and the Bay Area, which provided forums to discuss region-specific issues, concerns, and priorities.
Strategies
- Cap-and-Trade Program: firm limit on total greenhouse gas emissions; covers 85% of all emissions statewide; includes participation in the Western Climate Initiative.
- Electricity and Energy: improved appliance efficiency standards and other energy efficiency measures; the goal is for 33% of energy to come from renewable sources by 2020.
- High Global Warming Potential Gases: reduce emissions and use of refrigerants and certain other gases that have a much higher impact, per molecule, than carbon dioxide.
- Agriculture: more efficient agricultural equipment, fuel use and water use.
- Transportation: adherence to the "Pavley Standards" to achieve reductions in greenhouse gas emissions from motor vehicles.
- Industry: audit and regulate emissions from the 800 largest industrial sources statewide, including the cement industry.
- Forestry: preserve forest sequestration and other voluntary programs.
- Waste and Recycling: reduce methane emissions from landfills; reduce waste and increase recycling/reuse.
AB 32 Scoping Plan
Assembly Bill 32 (AB 32) required the California Air Resources Board (ARB or Board) to develop a Scoping Plan that describes the approach California will take to reduce greenhouse gases (GHG) to achieve the goal of reducing emissions to 1990 levels by 2020. The Scoping Plan was first considered by the Board in 2008 and must be updated every five years. ARB is currently in the process of updating the Scoping Plan. Details regarding this update are outlined below.
AB 32 Scoping Plan Update
The Scoping Plan Update (Update) builds upon the initial Scoping Plan with new strategies and recommendations. The Update identifies opportunities to leverage existing and new funds to further drive GHG emission reductions through strategic planning and targeted low-carbon investments. The Update defines ARB's climate change priorities for the next five years and sets the groundwork to reach California's post-2020 climate goals set forth in Executive Orders S-3-05 and B-16-2012. The Update will highlight California's progress toward meeting the near-term 2020 GHG emission reduction goals defined in the initial Scoping Plan. It will also evaluate how to align the State's longer-term GHG reduction strategies with other State policy priorities for water, waste, natural resources, clean energy, transportation, and land use.
What are the key focus areas for the Update? ARB plans to focus on six key topic areas for the post-2020 element: (1) transportation, fuels, and infrastructure; (2) energy generation, transmission, and efficiency; (3) waste; (4) water; (5) agriculture; and (6) natural and working lands.
What recent activity has occurred in 2013? In June 2013, ARB held a kickoff public workshop in Sacramento to discuss the development of the Scoping Plan Update, public process, and overall schedule. In July 2013, subsequent regional workshops were held in Diamond Bar, Fresno, and the Bay Area, which provided forums to discuss region-specific issues, concerns, and priorities. In addition, ARB accepted and considered informal stakeholder comments from June 13, 2013 through August 5, 2013. ARB also reconvened the Environmental Justice Advisory Committee to advise on, and provide recommendations for, the development of this Update.
On October 1, 2013, ARB released a discussion draft of the Update to the AB 32 Scoping Plan for public review and comment. On October 15, 2013, ARB held a public workshop and provided an update to the Board at the October 24, 2013 Board Hearing. Extensive public comment and input was received at the October Board Hearing, and over 115 comment letters were submitted on the discussion draft.
What activities are planned for 2014? In late January 2014, ARB plans to release the draft proposed Scoping Plan Update and Environmental Assessment. In February 2014, ARB will have a Board meeting discussion that will include additional opportunities for stakeholder feedback and public comment. In spring 2014, ARB will hold a Board Hearing to consider the Final Scoping Plan Update and Environmental Assessment.
What is the status of AB 32 implementation? The California Global Warming Solutions Act of 2006 (AB 32) has been implemented effectively with a suite of complementary strategies that serve as a model going forward. California is on target to meet the 2020 GHG emission reduction goal. Many of the GHG reduction measures (e.g., the Low Carbon Fuel Standard, Advanced Clean Car standards, and Cap-and-Trade) have been adopted over the last five years, and implementation activities are ongoing. California is achieving real reductions that put it on track to meet the AB 32 goal of returning emissions to 1990 levels by 2020.
Cap-and-Trade
On December 17, 2010, ARB adopted a cap-and-trade program to place an upper limit on statewide greenhouse gas emissions. This is the first program of its kind on this scale in the United States, though in the northeastern United States the Regional Greenhouse Gas Initiative (RGGI) works on a similar principle. Through the Western Climate Initiative (WCI), California is working to link its cap-and-trade system to other states. In October 2013, California officially linked its cap-and-trade program with Quebec's, administered by the Quebec Ministry of Sustainable Development, Environment, Wildlife, and Parks. The program had a soft start in 2012, with the first required compliance period starting in 2013. Emissions are to be reduced by two percent each year through 2015 and three percent each year from 2015 to 2020. The rules apply first to utilities and large industrial plants, and in 2015 will begin to be applied to fuel distributors as well, eventually covering 360 businesses at 600 locations throughout the State of California. Free credits will be distributed to businesses to account for about 90 percent of overall emissions in their sector, but they must buy allowances (credits) at auction to account for additional emissions. The auction format is a single-round, sealed-bid auction. A preliminary auction was held on August 30, 2012, with the first actual quarterly auction taking place on November 14, 2012.
CARB Quarterly Auction Results
These auctions demonstrate the following trends: (1) after an initial spike, the number of qualified bidders began to decrease; (2) the percentage of 2015 and 2016 allowances sold increased continually to reach 100%; (3) the percentage of current-year allowances sold remained constant at 100%; (4) although the settlement prices for current-year allowances initially increased, they then began to decrease; (5) the settlement prices for the 2015 and 2016 allowances have increased. Some of the best-known bidders were the California Department of Water Resources, Campbell Soup Supply Company, Chevron U.S.A. Inc.,
Citigroup Energy Inc., Exxon Mobil Corporation, J.P. Morgan Ventures Energy Corporation, Noble Americas Gas & Power Corp., Pacific Gas and Electric Company, Phillips 66 Company, Shell Energy North America, Silicon Valley Power, Southern California Edison Company, The Bank of Nova Scotia, Union Pacific Railroad Company, and Vista Metals Corp. A qualified bidder is an entity that registered for the auction, submitted an acceptable bid guarantee, and received acceptance from the ARB to participate in the auction.
Offsets
In addition to emission allowances (CCAs), compliance entities may also use a certain percentage of offset credits in the system. Offset credits are generated by projects that reduce emissions or act as sinks for greenhouse gases. Currently the Air Resources Board allows several types of offset projects to generate offset credits: U.S. forest and urban forest projects, livestock projects (methane emission control), and ozone-depleting substances projects. Offset provisions in the cap-and-trade scheme are, however, controversial and have been challenged in court. In March 2012, Citizens Climate Lobby and Our Children's Earth Foundation, two California environmental groups, sued the California Air Resources Board over the inclusion of its offset provisions. Their request was denied, and when Our Children's Earth Foundation appealed, the decision was affirmed.
Economic impacts
According to ARB, AB 32 is "generating jobs, promoting a growing, clean-energy economy and a healthy environment for California at the same time":
- AB 32 supports efficiency-driven job growth
- California gets more clean-energy venture capital investment than all other states combined
- Green technologies produce new jobs faster
- Venture capital investment produces thousands of new jobs
- Green jobs are growing faster than any other industry
- California leads the nation in clean technology
- California's economic powerhouses support AB 32
AB 32 requires California to lower greenhouse gas emissions to 1990 levels by 2020. Climate change will have a significant impact on the sustainability of water supplies in the coming decades.
Political challenges
The bill was challenged by Proposition 23 on the November 2010 ballot, which aimed to suspend AB 32 until state unemployment stayed below 5.5% for four consecutive quarters. The proposition was defeated by a wide margin.
Legal challenges
Two lawsuits have been filed challenging the legality of ARB's auctions of GHG emission permits. The petitioners contend that the auctions are not authorized under AB 32, and that the revenues generated by the auctions violate California's Proposition 13 or Proposition 26. A hearing was conducted on both challenges on August 28, 2013, in Sacramento County Superior Court. AB 26 was initiated by Assemblywoman Susan Bonilla (D-Concord) and was heard in the Senate Environmental Quality Committee on June 19, 2013. The bill is sponsored by the State Building and Construction Trades Council, AFL-CIO, and supported by the California Teamsters Public Affairs Council and the International Association of Heat and Frost Insulators Local 5. Briefly, the bill would direct a portion of cap-and-trade revenue toward increasing wages, adding jobs, and increasing the number of union members working in the industries that actually produce greenhouse gas emissions. In this case, union labor will be fighting environmental groups supportive of AB 32 goals.
The bill passed, 7–0. On November 12, 2013, the California Chamber of Commerce launched the first industry lawsuit against the auction portion of California's cap-and-trade program, on the basis that auctioning off allowances constitutes an unauthorized, unconstitutional tax. The complaint was filed in Sacramento Superior Court and seeks to stop the auction and have the auction regulations declared invalid. However, the superior court rejected the challenges to the state's cap-and-trade program, upholding a significant element of California's suite of programs to comply with AB 32 and to reduce the state's greenhouse gas emissions.
See also
- Climate change
- Kyoto Protocol
- Regulation of greenhouse gases under the Clean Air Act
- Sustainable Communities and Climate Protection Act of 2008
- Climate change in California
External links
- "California Global Warming Solutions Act (AB 32)". c2es.org. Center for Climate and Energy Solutions. Archived from the original on April 15, 2014. Retrieved September 29, 2016.
- "A Golden Opportunity: California's Solutions for Global Warming". nrdc.org. Natural Resources Defense Council. June 19, 2007. Retrieved September 29, 2016.
- Cobo, Kimberly (September 1, 2007). "California Global Warming Solutions Act of 2006: Meaningfully Decreasing Greenhouse Gas Emissions or Merely a Set of Empty Promises". Loyola of Los Angeles Law Review. 41 (1). Retrieved December 6, 2018.
greenhouse effect
The greenhouse effect occurs when greenhouse gases in a planet's atmosphere cause some of the heat radiated from the planet's surface to build up near the surface rather than escaping to space. This happens because stars emit shortwave radiation that passes through greenhouse gases, whereas planets emit longwave radiation that is partly absorbed by greenhouse gases. That difference reduces the rate at which a planet can cool off in response to being warmed by its host star. Adding more greenhouse gases further reduces the rate at which a planet emits radiation to space, raising its average surface temperature. The Earth's average surface temperature would be about −18 °C (−0.4 °F) without the greenhouse effect, compared to Earth's 20th-century average of about 14 °C (57 °F), or a more recent average of about 15 °C (59 °F). In addition to naturally present greenhouse gases, the burning of fossil fuels has increased amounts of carbon dioxide and methane in the atmosphere. As a result, global warming of about 1.2 °C (2.2 °F) has occurred since the industrial revolution, with the global average surface temperature increasing at a rate of 0.18 °C (0.32 °F) per decade since 1981. The wavelengths of radiation emitted by the Sun and Earth differ because their surface temperatures are different. The Sun has a surface temperature of 5,500 °C (9,900 °F), so it emits most of its energy as shortwave radiation in near-infrared and visible wavelengths (as sunlight). In contrast, Earth's surface has a much lower temperature, so it emits longwave radiation at mid- and far-infrared wavelengths (sometimes called thermal radiation or radiated heat). A gas is a greenhouse gas if it absorbs longwave radiation. Earth's atmosphere absorbs only 23% of incoming shortwave radiation, but absorbs 90% of the longwave radiation emitted by the surface, thus accumulating energy and warming the Earth's surface.
Terminology
The term greenhouse effect comes from an analogy to greenhouses. Both greenhouses and the greenhouse effect work by retaining heat from sunlight, but the way they retain heat differs. Greenhouses retain heat mainly by blocking convection (the movement of air). In contrast, the greenhouse effect retains heat by restricting radiative transfer through the air and reducing the rate at which heat escapes to space.
Discovery and investigation
The existence of the greenhouse effect, while not named as such, was proposed as early as 1824 by Joseph Fourier. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838. In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapor than for dry air, and that the effect is even greater with carbon dioxide. She concluded that "An atmosphere of that gas would give to our earth a high temperature..." John Tyndall was the first to measure the infrared absorption and emission of various gases and vapors. From 1859 onwards, he showed that the effect was due to a very small proportion of the atmosphere, with the main gases having no effect, and was largely due to water vapor, though small percentages of hydrocarbons and carbon dioxide had a significant effect. The effect was more fully quantified by Svante Arrhenius in 1896, who made the first quantitative prediction of global warming due to a hypothetical doubling of atmospheric carbon dioxide. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.
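The shortwave/longwave split described above can be made concrete with Wien's displacement law, which relates a blackbody's temperature to its peak emission wavelength. A minimal sketch in Python; the constant is standard, and the two temperatures are rounded versions of the values quoted above:

```python
# Wien's displacement law: lambda_peak = b / T.
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Return the wavelength of peak blackbody emission, in micrometers."""
    return WIEN_B / temperature_k * 1e6

# ~5,500 C solar surface and ~15 C terrestrial surface, as quoted above.
print(f"Sun   (~5773 K): peak near {peak_wavelength_um(5773):.2f} um (visible light)")
print(f"Earth (~288 K):  peak near {peak_wavelength_um(288):.1f} um (thermal infrared)")
```

The ~0.5 µm and ~10 µm peaks are why solar radiation is called shortwave and terrestrial radiation longwave.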
Measurement
Matter emits thermal radiation in an amount that is directly proportional to the fourth power of its temperature. Some of the radiation emitted by the Earth's surface is absorbed by greenhouse gases and clouds. Without this absorption, Earth's surface would have an average temperature of −18 °C (−0.4 °F). However, because some of the radiation is absorbed, Earth's average surface temperature is around 15 °C (59 °F). Thus, the Earth's greenhouse effect may be measured as a temperature change of 33 °C (59 °F). Thermal radiation is characterized by how much energy it carries, typically in watts per square meter (W/m2). Scientists also measure the greenhouse effect based on how much more longwave thermal radiation leaves the Earth's surface than reaches space. Currently, longwave radiation leaves the surface at an average rate of 398 W/m2, but only 239 W/m2 reaches space. Thus, the Earth's greenhouse effect can also be measured as an energy flow change of 159 W/m2. The greenhouse effect can be expressed as a fraction (0.40) or percentage (40%) of the longwave thermal radiation that leaves Earth's surface but does not reach space. Whether the greenhouse effect is expressed as a change in temperature or as a change in longwave thermal radiation, the same effect is being measured.
Energy balance and temperature
Incoming shortwave radiation
Hotter matter emits shorter wavelengths of radiation. As a result, the Sun emits shortwave radiation as sunlight, while the Earth and its atmosphere emit longwave radiation. Sunlight includes ultraviolet, visible light, and near-infrared radiation. Sunlight is reflected and absorbed by the Earth and its atmosphere. The atmosphere and clouds reflect about 23% and absorb 23%. The surface reflects 7% and absorbs 48%. Overall, Earth reflects about 30% of the incoming sunlight and absorbs the rest (240 W/m2).
Outgoing longwave radiation
The Earth and its atmosphere emit longwave radiation, also known as thermal infrared or terrestrial radiation. Informally, longwave radiation is sometimes called thermal radiation. Outgoing longwave radiation (OLR) is the radiation from Earth and its atmosphere that passes through the atmosphere and into space. The greenhouse effect can be directly seen in graphs of Earth's outgoing longwave radiation as a function of frequency (or wavelength). The area between the curve for longwave radiation emitted by Earth's surface and the curve for outgoing longwave radiation indicates the size of the greenhouse effect. Different substances are responsible for reducing the radiation energy reaching space at different frequencies; for some frequencies, multiple substances play a role. Carbon dioxide is understood to be responsible for the dip in outgoing radiation (and the associated rise in the greenhouse effect) at around 667 cm−1 (equivalent to a wavelength of 15 microns). Each layer of the atmosphere with greenhouse gases absorbs some of the longwave radiation being radiated upwards from lower layers. It also emits longwave radiation in all directions, both upwards and downwards, in equilibrium with the amount it has absorbed. This results in less radiative heat loss and more warmth below. Increasing the concentration of the gases increases the amount of absorption and emission, and thereby causes more heat to be retained at the surface and in the layers below.
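The 159 W/m2 figure and the 40% fraction follow directly from the two fluxes just quoted; a quick check, using only the numbers given above:

```python
SLR = 398.0  # longwave flux leaving Earth's surface, W/m2 (quoted above)
OLR = 239.0  # outgoing longwave flux reaching space, W/m2 (quoted above)

G = SLR - OLR       # greenhouse effect measured as an energy-flow change
g_norm = G / SLR    # fraction of surface-emitted radiation that does not reach space

print(f"G  = {G:.0f} W/m2")                 # -> 159 W/m2
print(f"g~ = {g_norm:.2f} ({g_norm:.0%})")  # -> 0.40 (40%)
```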
Effective temperature
The power of outgoing longwave radiation emitted by a planet corresponds to the effective temperature of the planet. The effective temperature is the temperature that a planet radiating with a uniform temperature (a blackbody) would need to have in order to radiate the same amount of energy. This concept may be used to compare the amount of longwave radiation emitted to space and the amount of longwave radiation emitted by the surface:
- Emissions to space: Based on its emissions of longwave radiation to space, Earth's overall effective temperature is −18 °C (0 °F).
- Emissions from surface: Based on thermal emissions from the surface, Earth's effective surface temperature is about 16 °C (61 °F), which is 34 °C (61 °F) warmer than Earth's overall effective temperature.
Earth's surface temperature is often reported in terms of the average near-surface air temperature. This is about 15 °C (59 °F), a bit lower than the effective surface temperature. This value is 33 °C (59 °F) warmer than Earth's overall effective temperature.
Energy flux
Energy flux is the rate of energy flow per unit area. Energy flux is expressed in units of W/m2, which is the number of joules of energy that pass through a square meter each second. Most fluxes quoted in high-level discussions of climate are global values, which means they are the total flow of energy over the entire globe, divided by the surface area of the Earth, 5.1×1014 m2 (5.1×108 km2; 2.0×108 sq mi). The fluxes of radiation arriving at and leaving the Earth are important because radiative transfer is the only process capable of exchanging energy between Earth and the rest of the universe.
Radiative balance
The temperature of a planet depends on the balance between incoming radiation and outgoing radiation. If incoming radiation exceeds outgoing radiation, a planet will warm. If outgoing radiation exceeds incoming radiation, a planet will cool. A planet will tend towards a state of radiative equilibrium, in which the power of outgoing radiation equals the power of absorbed incoming radiation. Earth's energy imbalance is the amount by which the power of incoming sunlight absorbed by Earth's surface or atmosphere exceeds the power of outgoing longwave radiation emitted to space. Energy imbalance is the fundamental measurement that drives surface temperature. A UN presentation says "The EEI is the most critical number defining the prospects for continued global warming and climate change." One study argues, "The absolute value of EEI represents the most fundamental metric defining the status of global climate change." Earth's energy imbalance (EEI) was about 0.7 W/m2 as of around 2015, indicating that Earth as a whole is accumulating thermal energy and is in the process of becoming warmer. Over 90% of the retained energy goes into warming the oceans, with much smaller amounts going into heating the land, atmosphere, and ice.
Day and night cycle
A simple picture assumes a steady state, but in the real world the day/night (diurnal) cycle, as well as the seasonal cycle and weather disturbances, complicate matters. Solar heating applies only during daytime. At night the atmosphere cools somewhat, but not greatly, because the thermal inertia of the climate system resists changes both day and night, as well as for longer periods. Diurnal temperature changes decrease with height in the atmosphere.
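To get a sense of what a global-mean flux of 0.7 W/m2 means in absolute terms, it can be multiplied by Earth's surface area, which is quoted above; a minimal sketch (the seconds-per-year figure is the usual round approximation):

```python
EEI = 0.7                 # Earth's energy imbalance, W/m2 (c. 2015 figure above)
EARTH_AREA = 5.1e14       # Earth's surface area, m2 (quoted above)
SECONDS_PER_YEAR = 3.156e7

heating_power_w = EEI * EARTH_AREA
energy_per_year_j = heating_power_w * SECONDS_PER_YEAR

print(f"Global heating power: {heating_power_w:.2e} W (~{heating_power_w / 1e12:.0f} TW)")
print(f"Energy accumulated per year: {energy_per_year_j:.1e} J")
```

The result, roughly 360 TW (about 10^22 joules per year), illustrates why even a small per-square-meter imbalance implies substantial heat accumulation, most of it going into the oceans as noted above.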
Simplified models
Simplified models are sometimes used to support understanding of how the greenhouse effect comes about and how this affects surface temperature.
Atmospheric layer models
The greenhouse effect can be seen to occur in a simplified model in which the air is treated as if it were a single uniform layer exchanging radiation with the ground and space. Slightly more complex models add additional layers, or introduce convection.
Equivalent emission altitude
One simplification is to treat all outgoing longwave radiation as being emitted from an altitude where the air temperature equals the overall effective temperature for planetary emissions, $T_{\mathrm{eff}}$. Some authors have referred to this altitude as the effective radiating level (ERL), and suggest that as the CO2 concentration increases, the ERL must rise to maintain the same mass of CO2 above that level. This approach is less accurate than accounting for variation in radiation wavelength by emission altitude. However, it can be useful in supporting a simplified understanding of the greenhouse effect. For instance, it can be used to explain how the greenhouse effect increases as the concentration of greenhouse gases increases. Earth's overall equivalent emission altitude has been increasing at a trend of 23 m (75 ft) per decade, which is said to be consistent with a global mean surface warming of 0.12 °C (0.22 °F) per decade over the period 1979–2011.
Effect of lapse rate
Lapse rate
In the lower portion of the atmosphere, the troposphere, the air temperature decreases (or "lapses") with increasing altitude. The rate at which temperature changes with altitude is called the lapse rate. On Earth, the air temperature decreases by about 6.5 °C/km (3.6 °F per 1,000 ft) on average, although this varies. The temperature lapse is caused by convection. Air warmed by the surface rises. As it rises, air expands and cools. Simultaneously, other air descends, compresses, and warms. This process creates a vertical temperature gradient within the atmosphere. This vertical temperature gradient is essential to the greenhouse effect: if the lapse rate were zero (so that the atmospheric temperature did not vary with altitude and was the same as the surface temperature), then there would be no greenhouse effect (i.e., its value would be zero).
Emission temperature and altitude
Greenhouse gases make the atmosphere near Earth's surface mostly opaque to longwave radiation. The atmosphere only becomes transparent to longwave radiation at higher altitudes, where the air is less dense, there is less water vapor, and reduced pressure broadening of absorption lines limits the wavelengths that gas molecules can absorb. For any given wavelength, the longwave radiation that reaches space is emitted by a particular radiating layer of the atmosphere. The intensity of the emitted radiation is determined by the weighted average air temperature within that layer. So, for any given wavelength of radiation emitted to space, there is an associated effective emission temperature (or brightness temperature). A given wavelength of radiation may also be said to have an effective emission altitude, which is a weighted average of the altitudes within the radiating layer. The effective emission temperature and altitude vary by wavelength (or frequency). This phenomenon may be seen by examining plots of radiation emitted to space.
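The equivalent-emission-altitude and lapse-rate pictures can be tied together numerically. A minimal sketch, assuming the round figures quoted in this section (a ~33 °C greenhouse temperature difference, a 6.5 °C/km mean lapse rate, and the 23 m/decade emission-altitude trend):

```python
LAPSE_RATE_K_PER_KM = 6.5   # mean tropospheric lapse rate (quoted above)
GHE_DELTA_T_K = 33.0        # surface minus effective temperature (quoted earlier)
ALTITUDE_TREND_M = 23.0     # rise in equivalent emission altitude, m per decade

# Mean emission altitude implied by the lapse rate.
emission_altitude_km = GHE_DELTA_T_K / LAPSE_RATE_K_PER_KM
print(f"Implied mean emission altitude: ~{emission_altitude_km:.1f} km")  # ~5 km

# Surface warming implied by a rising emission level at a fixed lapse rate.
warming_k_per_decade = (ALTITUDE_TREND_M / 1000.0) * LAPSE_RATE_K_PER_KM
print(f"Implied warming: ~{warming_k_per_decade:.2f} K/decade")  # ~0.15 K/decade
```

The ~0.15 K/decade estimate is in rough agreement with the 0.12 °C/decade figure quoted above; the remaining gap reflects the simplifications of the single-emission-level picture.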
Greenhouse gases and the lapse rate
Earth's surface radiates longwave radiation with wavelengths in the range of 4–100 microns. Greenhouse gases, though largely transparent to incoming solar radiation, are more absorbent for some wavelengths in this range. The atmosphere near the Earth's surface is largely opaque to longwave radiation, and most heat loss from the surface is by evaporation and convection. However, radiative energy losses become increasingly important higher in the atmosphere, largely because of the decreasing concentration of water vapor, an important greenhouse gas. Rather than thinking of longwave radiation headed to space as coming from the surface itself, it is more realistic to think of this outgoing radiation as being emitted by a layer in the mid-troposphere, which is effectively coupled to the surface by the lapse rate. The difference in temperature between these two locations explains the difference between surface emissions and emissions to space, i.e., it explains the greenhouse effect.
Greenhouse gases
A greenhouse gas (GHG) is a gas which contributes to the trapping of heat by impeding the flow of longwave radiation out of a planet's atmosphere. Greenhouse gases contribute most of the greenhouse effect in Earth's energy budget.
Infrared active gases
Gases which can absorb and emit longwave radiation are said to be infrared active and act as greenhouse gases. Most gases whose molecules have two different atoms (such as carbon monoxide, CO), and all gases with three or more atoms (including H2O and CO2), are infrared active and act as greenhouse gases. (Technically, this is because when these molecules vibrate, those vibrations modify the molecular dipole moment, or asymmetry in the distribution of electrical charge. See Infrared spectroscopy.) Gases with only one atom (such as argon, Ar) or with two identical atoms (such as nitrogen, N2, and oxygen, O2) are not infrared active. They are transparent to longwave radiation and, for practical purposes, do not absorb or emit longwave radiation. (This is because their molecules are symmetrical and so do not have a dipole moment.) Such gases make up more than 99% of the dry atmosphere.
Absorption and emission
Greenhouse gases absorb and emit longwave radiation within specific ranges of wavelengths (organized as spectral lines or bands). When greenhouse gases absorb radiation, they distribute the acquired energy to the surrounding air as thermal energy (i.e., kinetic energy of gas molecules). Energy is transferred from greenhouse gas molecules to other molecules via molecular collisions. Contrary to what is sometimes said, greenhouse gases do not "re-emit" photons after they are absorbed. Because each molecule experiences billions of collisions per second, any energy a greenhouse gas molecule receives by absorbing a photon will be redistributed to other molecules before there is a chance for a new photon to be emitted. In a separate process, greenhouse gases emit longwave radiation at a rate determined by the air temperature. This thermal energy is either absorbed by other greenhouse gas molecules or leaves the atmosphere, cooling it.
Contributions of different gases
By their percentage contribution to the overall greenhouse effect on Earth, the four major greenhouse gases are:
- Water vapor (H2O): 36–72% (~75% including clouds)
- Carbon dioxide (CO2): 9–26%
- Methane (CH4): 4–9%
- Tropospheric ozone (O3): 3–7%
It is not practical to assign a specific percentage to each gas because the absorption and emission bands of the gases overlap (hence the ranges given above).
A water molecule only stays in the atmosphere for an average of 8 to 10 days, which corresponds with the high variability in the contribution from clouds and humidity at any particular time and location. There are other influential gases that contribute to the greenhouse effect, including nitrous oxide (N2O), perfluorocarbons (PFCs), chlorofluorocarbons (CFCs), hydrofluorocarbons (HFCs), and sulfur hexafluoride (SF6). These gases are mostly produced through human activities and have thus played an important part in climate change.
Concentration changes
The concentration of a greenhouse gas is typically measured in parts per million (ppm) or parts per billion (ppb) by volume. A CO2 concentration of 420 ppm means that 420 out of every million air molecules is a CO2 molecule. Greenhouse gas concentrations changed as follows from 1750 to 2019:
- Carbon dioxide (CO2): 278.3 to 409.9 ppm, up 47%
- Methane (CH4): 729.2 to 1866.3 ppb, up 156%
- Nitrous oxide (N2O): 270.1 to 332.1 ppb, up 23%
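The percentage increases quoted above follow directly from the concentration pairs; a quick arithmetic check using only the numbers listed:

```python
# (gas, 1750 level, 2019 level) -- ppm for CO2, ppb for CH4 and N2O
changes = [
    ("CO2", 278.3, 409.9),
    ("CH4", 729.2, 1866.3),
    ("N2O", 270.1, 332.1),
]
for gas, start, end in changes:
    pct = (end / start - 1.0) * 100.0
    print(f"{gas}: {start} -> {end}, up {pct:.0f}%")  # 47%, 156%, 23%
```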
Radiative effects
- Effect on air: Air is warmed by latent heat (buoyant water vapor condensing into water droplets and releasing heat), by thermals (warm air rising from below), and by sunlight being absorbed in the atmosphere. Air is cooled radiatively, by greenhouse gases and clouds emitting longwave thermal radiation. Within the troposphere, greenhouse gases typically have a net cooling effect on air, emitting more thermal radiation than they absorb. Warming and cooling of air are well balanced, on average, so that the atmosphere maintains a roughly stable average temperature.
- Effect on surface cooling: Longwave radiation flows both upward and downward due to absorption and emission in the atmosphere. These canceling energy flows reduce radiative surface cooling (the net upward radiative energy flow). Latent heat transport and thermals provide non-radiative surface cooling, which partially compensates for this reduction, but there is still a net reduction in surface cooling for a given surface temperature.
- Effect on TOA energy balance: Greenhouse gases impact the top-of-atmosphere (TOA) energy budget by reducing the flux of longwave radiation emitted to space for a given surface temperature. Thus, greenhouse gases alter the energy balance at the TOA. This means that the surface temperature needs to be higher than the planet's effective temperature (the temperature associated with emissions to space) in order for the outgoing energy emitted to space to balance the incoming energy from sunlight. It is important to focus on the top-of-atmosphere energy budget (rather than the surface energy budget) when reasoning about the warming effect of greenhouse gases.
Clouds and aerosols
Clouds and aerosols have both cooling effects, associated with reflecting sunlight back to space, and warming effects, associated with trapping thermal radiation.
Clouds
On average, clouds have a strong net cooling effect. However, the mix of cooling and warming effects varies depending on the detailed characteristics of particular clouds (including their type, height, and optical properties). Thin cirrus clouds can have a net warming effect. Clouds can absorb and emit infrared radiation and thus affect the radiative properties of the atmosphere. Clouds include liquid clouds, mixed-phase clouds and ice clouds. Liquid clouds are low clouds and have negative radiative forcing. Mixed-phase clouds contain both liquid water and solid ice at subfreezing temperatures, and their radiative properties (optical depth or optical thickness) are substantially influenced by the liquid content. Ice clouds are high clouds, and their radiative forcing depends on the ice crystal number concentration, cloud thickness and ice water content. The radiative properties of liquid clouds depend strongly on cloud microphysical properties, such as cloud liquid water content and cloud drop size distribution. Liquid clouds with higher liquid water content and smaller water droplets have a stronger negative radiative forcing. Cloud liquid content is usually related to surface and atmospheric circulations. Over the warm ocean, the atmosphere is usually rich in water vapor, and liquid clouds there thus contain higher liquid water content. When moist air flows converge in the clouds and generate strong updrafts, the water content can be much higher. Aerosols influence the cloud drop size distribution; for example, in polluted industrial regions with abundant aerosols, the water droplets in liquid clouds are often small. Mixed-phase clouds have negative radiative forcing, with larger uncertainty than liquid clouds. One reason is that their microphysics are much more complicated because of the coexistence of both liquid and solid water. For example, the Wegener–Bergeron–Findeisen process can deplete large numbers of water droplets and enlarge small ice crystals into large ones in a short period of time, and the Hallett–Mossop process shatters liquid droplets that collide with large ice crystals, freezing them into many small ice splinters. Cloud radiative properties can change dramatically during these processes, because small ice crystals reflect much more sunlight and generate a larger negative radiative forcing than large water droplets. Cirrus clouds can either enhance or reduce the greenhouse effect, depending on cloud thickness. Thin cirrus is usually considered to have positive radiative forcing and thick cirrus negative radiative forcing. Ice water content and ice size distribution also determine cirrus radiative properties: the larger the ice water content, the greater the cooling effect; and for the same ice water content, cirrus composed of many smaller ice crystals has a larger cooling effect than cirrus composed of fewer, larger crystals.
Aerosols
There are two major sources of atmospheric aerosols: natural sources and anthropogenic sources. Natural sources include desert dust, sea salt, volcanic ash, volatile organic compounds (VOCs) from vegetation, and smoke from forest fires. Anthropogenic aerosols arise from fossil fuel burning, deforestation fires, and the burning of agricultural waste. The amount of anthropogenic aerosols has increased dramatically since preindustrial times, which is considered a major contributor to global air pollution. Since these aerosols have different chemical compositions and physical properties, they can produce different radiative forcing effects that warm or cool the global climate. The impact of atmospheric aerosols on climate can be classified as direct or indirect with respect to radiative forcing of the climate system. Aerosols can directly scatter and absorb solar and infrared radiation in the atmosphere, and hence exert a direct radiative forcing on the global climate system.
Aerosols can also act as cloud condensation nuclei (CCN) to form clouds, changing the formation and precipitation efficiency of liquid water, ice and mixed-phase clouds, and thereby causing an indirect radiative forcing associated with these changes in cloud properties. Aerosols that mainly scatter solar radiation can reflect it back to space, which cools the global climate. All atmospheric aerosols have the capability to scatter incoming solar radiation, but only a few types can absorb it. These include black carbon (BC), organic carbon (OC) and mineral dust, which can induce non-negligible warming effects. The emission of black carbon is significant in developing countries, such as China and India. Black carbon can be transported over long distances and mixed with other aerosols along the way. Solar-absorption efficiency has a positive correlation with the ratio of black carbon to sulphate. Particle size and mixing ratio not only determine the absorption efficiency of BC but also affect its lifetime. The surface albedo of snow and ice can be reduced by the deposition of absorbing aerosols, which also causes heating effects. The heating effects of black carbon at high elevations can be as important as carbon dioxide in the melting of snowpacks and glaciers. In addition to these absorbing aerosols, stratospheric aerosols can also induce local warming by increasing the longwave radiation reaching the surface and reducing outgoing longwave radiation.
Role in climate change
Strengthening of the greenhouse effect through human activities is known as the enhanced (or anthropogenic) greenhouse effect. As well as being inferred from measurements by ARGO, CERES and other instruments throughout the 21st century, this increase in radiative forcing from human activity has been observed directly, and is attributable mainly to increased atmospheric carbon dioxide levels. According to the 2014 Assessment Report from the Intergovernmental Panel on Climate Change, "atmospheric concentrations of carbon dioxide, methane and nitrous oxide are unprecedented in at least the last 800,000 years. Their effects, together with those of other anthropogenic drivers, have been detected throughout the climate system and are extremely likely to have been the dominant cause of the observed warming since the mid-20th century". CO2 is produced by fossil fuel burning and other activities such as cement production and tropical deforestation. Measurements of CO2 from the Mauna Loa Observatory show that concentrations have increased from about 313 parts per million (ppm) in 1960, passing the 400 ppm milestone in 2013. The current observed amount of CO2 exceeds the geological record maxima (≈300 ppm) from ice core data. The effect of combustion-produced carbon dioxide on the global climate, a special case of the greenhouse effect first described in 1896 by Svante Arrhenius, has also been called the Callendar effect. Over the past 800,000 years, ice core data show that carbon dioxide has varied from values as low as 180 ppm to the pre-industrial level of 270 ppm. Paleoclimatologists consider variations in carbon dioxide concentration to be a fundamental factor influencing climate variations over this time scale.
Basic formulas
Effective temperature
A given flux of thermal radiation has an associated effective radiating temperature or effective temperature.
Effective temperature is the temperature that a black body (a perfect absorber/emitter) would need to be to emit that much thermal radiation. Thus, the overall effective temperature of a planet is given by

$T_{\mathrm{eff}} = (\mathrm{OLR}/\sigma)^{1/4}$

where OLR is the average flux (power per unit area) of outgoing longwave radiation emitted to space and $\sigma$ is the Stefan-Boltzmann constant. Similarly, the effective temperature of the surface is given by

$T_{\mathrm{surface,eff}} = (\mathrm{SLR}/\sigma)^{1/4}$

where SLR is the average flux of longwave radiation emitted by the surface. (OLR is a conventional abbreviation. SLR is used here to denote the flux of surface-emitted longwave radiation, although there is no standard abbreviation for this.)
Metrics for the greenhouse effect
The IPCC reports the greenhouse effect, G, as 159 W/m2, where G is the flux of longwave thermal radiation that leaves the surface minus the flux of outgoing longwave radiation that reaches space:

$G = \mathrm{SLR} - \mathrm{OLR}$

Alternatively, the greenhouse effect can be described using the normalized greenhouse effect, g̃, defined as

$\tilde{g} = G/\mathrm{SLR} = 1 - \mathrm{OLR}/\mathrm{SLR}$

The normalized greenhouse effect is the fraction of the amount of thermal radiation emitted by the surface that does not reach space. Based on the IPCC numbers, g̃ = 0.40. In other words, 40 percent less thermal radiation reaches space than what leaves the surface. Sometimes the greenhouse effect is quantified as a temperature difference, which is closely related to the quantities above. When the greenhouse effect is expressed as a temperature difference, $\Delta T_{\mathrm{GHE}}$, this refers to the effective temperature associated with thermal radiation emissions from the surface minus the effective temperature associated with emissions to space:

$\Delta T_{\mathrm{GHE}} = T_{\mathrm{surface,eff}} - T_{\mathrm{eff}} = (\mathrm{SLR}/\sigma)^{1/4} - (\mathrm{OLR}/\sigma)^{1/4}$

Informal discussions of the greenhouse effect often compare the actual surface temperature to the temperature that the planet would have if there were no greenhouse gases. However, in formal technical discussions, when the size of the greenhouse effect is quantified as a temperature, this is generally done using the above formula. The formula refers to the effective surface temperature rather than the actual surface temperature, and compares the surface with the top of the atmosphere, rather than comparing reality to a hypothetical situation. The temperature difference, $\Delta T_{\mathrm{GHE}}$, indicates how much warmer a planet's surface is than the planet's overall effective temperature.
Radiative balance
Earth's top-of-atmosphere (TOA) energy imbalance (EEI) is the amount by which the power of incoming radiation exceeds the power of outgoing radiation:

$\mathrm{EEI} = \mathrm{ASR} - \mathrm{OLR}$

where ASR is the mean flux of absorbed solar radiation. ASR may be expanded as

$\mathrm{ASR} = (1 - A)\,\mathrm{MSI}$

where $A$ is the albedo (reflectivity) of the planet and MSI is the mean solar irradiance incoming at the top of the atmosphere. The radiative equilibrium temperature of a planet can be expressed as

$T_{\mathrm{radeq}} = (\mathrm{ASR}/\sigma)^{1/4} = \left[(1 - A)\,\mathrm{MSI}/\sigma\right]^{1/4}$

A planet's temperature will tend to shift towards a state of radiative equilibrium, in which the TOA energy imbalance is zero, i.e., $\mathrm{EEI} = 0$. When the planet is in radiative equilibrium, the overall effective temperature of the planet is given by

$T_{\mathrm{eff}} = T_{\mathrm{radeq}}$

Thus, the concept of radiative equilibrium is important because it indicates what effective temperature a planet will tend towards having. If, in addition to knowing the effective temperature, $T_{\mathrm{eff}}$, we know the value of the greenhouse effect, then we know the mean (average) surface temperature of the planet. This is why the quantity known as the greenhouse effect is important: it is one of the few quantities that go into determining the planet's mean surface temperature.
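These formulas can be checked numerically against the fluxes quoted earlier in this article. A minimal sketch in Python; the albedo values match those quoted here (about 30% for Earth, 77% for Venus), while the mean-solar-irradiance figures (the solar constant at each planet divided by 4) are assumed round numbers rather than values given in the text:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(flux_w_m2: float) -> float:
    """Invert the Stefan-Boltzmann law: T = (F / sigma)^(1/4)."""
    return (flux_w_m2 / SIGMA) ** 0.25

OLR, SLR = 239.0, 398.0  # fluxes quoted earlier, W/m2

t_eff = effective_temperature(OLR)    # ~255 K (-18 C)
t_surf = effective_temperature(SLR)   # ~289 K (~16 C)
print(f"T_eff         = {t_eff:.0f} K ({t_eff - 273.15:.0f} C)")
print(f"T_surface,eff = {t_surf:.0f} K ({t_surf - 273.15:.0f} C)")
print(f"G = {SLR - OLR:.0f} W/m2, g~ = {1 - OLR / SLR:.2f}, dT_GHE = {t_surf - t_eff:.1f} K")

def t_radeq(albedo: float, msi_w_m2: float) -> float:
    """Radiative equilibrium temperature from albedo and mean solar irradiance."""
    return ((1.0 - albedo) * msi_w_m2 / SIGMA) ** 0.25

# MSI values are assumed round approximations, not taken from the text.
print(f"Earth: T_radeq = {t_radeq(0.30, 340.0):.0f} K")  # ~255 K, matching T_eff above
print(f"Venus: T_radeq = {t_radeq(0.77, 650.0):.0f} K")  # ~227 K; the ~232 K quoted
                                                         # below reflects slightly
                                                         # different assumed values
```

The computed $\Delta T_{\mathrm{GHE}}$ of roughly 34–35 K matches the ~34 °C effective-temperature difference quoted earlier; the classical 33 °C figure instead compares the actual mean surface temperature (~288 K) with $T_{\mathrm{eff}}$.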
Greenhouse effect and temperature
Typically, a planet will be close to radiative equilibrium, with the rates of incoming and outgoing energy being well balanced. Under such conditions, the planet's equilibrium temperature is determined by the mean solar irradiance and the planetary albedo (how much sunlight is reflected back to space instead of being absorbed). The greenhouse effect measures how much warmer the surface is than the overall effective temperature of the planet. So, the effective surface temperature, $T_{\mathrm{surface,eff}}$, is, using the definition of $\Delta T_{\mathrm{GHE}}$,

$T_{\mathrm{surface,eff}} = T_{\mathrm{eff}} + \Delta T_{\mathrm{GHE}}$

One could also express the relationship between $T_{\mathrm{surface,eff}}$ and $T_{\mathrm{eff}}$ using G or g̃. So, the principle that a larger greenhouse effect corresponds to a higher surface temperature, if everything else (i.e., the factors that determine $T_{\mathrm{eff}}$) is held fixed, is true as a matter of definition. Note that the greenhouse effect influences the temperature of the planet as a whole, in tandem with the planet's tendency to move toward radiative equilibrium.
Bodies other than Earth
In the solar system, apart from the Earth, at least two other planets and a moon also have a greenhouse effect.
Venus
The greenhouse effect on Venus is particularly large, and it brings the surface temperature to as high as 735 K (462 °C; 863 °F). This is due to its very dense atmosphere, which consists of about 97% carbon dioxide. Although Venus is about 30% closer to the Sun, it absorbs (and is warmed by) less sunlight than Earth, because Venus reflects 77% of incident sunlight while Earth reflects around 30%. In the absence of a greenhouse effect, the surface of Venus would be expected to have a temperature of 232 K (−41 °C; −42 °F).
Thus, contrary to what one might think, being nearer to the Sun is not a reason why Venus is warmer than Earth. Due to its high pressure, the CO2 in the atmosphere of Venus exhibits continuum absorption (absorption over a broad range of wavelengths) and is not limited to absorption within the bands relevant to its absorption on Earth.
Mars
Mars has about 70 times as much carbon dioxide as Earth, but experiences only a small greenhouse effect, about 6 K (11 °F). The greenhouse effect is small due to the lack of water vapor and the overall thinness of the atmosphere. The same radiative transfer calculations that predict warming on Earth accurately explain the temperature on Mars, given its atmospheric composition.
Titan
Saturn's moon Titan has both a greenhouse effect and an anti-greenhouse effect. The presence of nitrogen (N2), methane (CH4), and hydrogen (H2) in the atmosphere contributes to a greenhouse effect, increasing the surface temperature by 21 K (38 °F) over the expected temperature of the body without these gases. While the gases N2 and H2 ordinarily do not absorb infrared radiation, they absorb thermal radiation on Titan due to pressure-induced collisions, the large mass and thickness of the atmosphere, and the long wavelengths of the thermal radiation from the cold surface. The existence of a high-altitude haze, which absorbs wavelengths of solar radiation but is transparent to infrared, contributes to an anti-greenhouse effect of approximately 9 K (16 °F). The net result of these two effects is a warming of 21 K − 9 K = 12 K (22 °F), so Titan's surface temperature of 94 K (−179 °C; −290 °F) is 12 K warmer than it would be if there were no atmosphere.
Effect of pressure
One cannot predict the relative sizes of the greenhouse effects on different bodies simply by comparing the amount of greenhouse gases in their atmospheres. This is because factors other than the quantity of these gases also play a role in determining the size of the greenhouse effect. Overall atmospheric pressure affects how much thermal radiation each molecule of a greenhouse gas can absorb: high pressure leads to more absorption and low pressure to less. This is due to "pressure broadening" of spectral lines. When the total atmospheric pressure is higher, collisions between molecules occur at a higher rate. Collisions broaden the width of absorption lines, allowing a greenhouse gas to absorb thermal radiation over a broader range of wavelengths. Each molecule in the air near Earth's surface experiences about 7 billion collisions per second. This rate is lower at higher altitudes, where the pressure and temperature are both lower. This means that greenhouse gases are able to absorb more wavelengths in the lower atmosphere than they can in the upper atmosphere. On other planets, pressure broadening means that each molecule of a greenhouse gas is more effective at trapping thermal radiation if the total atmospheric pressure is high (as on Venus), and less effective if the atmospheric pressure is low (as on Mars).
Misconceptions
There are sometimes misunderstandings about how the greenhouse effect functions and raises temperatures. The surface budget fallacy is a common error in thinking. It involves thinking that an increased CO2 concentration could only cause warming by increasing the downward thermal radiation to the surface, as a result of making the atmosphere a better emitter.
If the atmosphere near the surface is already nearly opaque to thermal radiation, this would mean that increasing CO2 could not lead to higher temperatures. However, it is a mistake to focus on the surface energy budget rather than the top-of-atmosphere energy budget. Regardless of what happens at the surface, increasing the concentration of CO2 tends to reduce the thermal radiation reaching space (OLR), leading to a TOA energy imbalance that leads to warming. Earlier researchers like Callendar (1938) and Plass (1959) focused on the surface budget, but the work of Manabe in the 1960s clarified the importance of the top-of-atmosphere energy budget. Among those who do not believe in the greenhouse effect, there is a fallacy that the greenhouse effect involves greenhouse gases sending heat from the cool atmosphere to the planet's warm surface, in violation of the Second Law of Thermodynamics. However, this idea reflects a misunderstanding. Radiation heat flow is the net energy flow after the flows of radiation in both directions have been taken into account. Radiation heat flow occurs in the direction from the surface to the atmosphere and space, as is to be expected given that the surface is warmer than the atmosphere and space. While greenhouse gases emit thermal radiation downward to the surface, this is part of the normal process of radiation heat transfer. The downward thermal radiation simply reduces the upward net energy flow (radiation heat flow), i.e., it reduces cooling.
Related effects
Negative greenhouse effect
The greenhouse effect involves greenhouse gases reducing the rate of radiative cooling to space, relative to what would happen if those gases were not present. This occurs because greenhouse gases block the outflow of radiative heat at low altitudes but emit thermal radiation at high altitudes, where the air is cooler and thermal radiation rates are lower. In a location where there is a strong temperature inversion, so that the air is warmer than the surface, it is possible for this effect to be reversed, so that the presence of greenhouse gases increases the rate of radiative cooling to space. In this case, the rate of thermal radiation emission to space is greater than the rate at which thermal radiation is emitted by the surface, and the local value of the greenhouse effect is negative. Recent studies have shown that, at times, there is a negative greenhouse effect over parts of Antarctica.
Anti-greenhouse effect
The anti-greenhouse effect is a mechanism similar and symmetrical to the greenhouse effect. In the greenhouse effect, the lower atmosphere:
- absorbs thermal radiation while being relatively transparent to sunlight;
- is cooler at the top than at the bottom;
- consequently emits less thermal radiation at the top of the atmosphere than is emitted by the surface;
- which results in the surface being warmer than the effective temperature associated with emissions from the top of the atmosphere.
In the anti-greenhouse effect, an upper layer of the atmosphere:
- absorbs sunlight while being relatively transparent to thermal radiation;
- is warmer at the top than at the bottom;
- consequently emits more net thermal radiation to space than is emitted by lower layers of the atmosphere;
- which results in the surface being cooler than it would be if an equal amount of sunlight was absorbed but not by that upper layer.
This effect has been discovered to exist on Saturn's moon Titan.
Runaway greenhouse effect
A runaway greenhouse effect occurs when greenhouse gases accumulate in the atmosphere through a positive feedback cycle to such an extent that they substantially block radiated heat from escaping into space, greatly increasing the temperature of the planet. A runaway greenhouse effect involving carbon dioxide and water vapor has for many years been hypothesized to have occurred on Venus; this idea is still largely accepted. The planet Venus experienced a runaway greenhouse effect, resulting in an atmosphere which is 96% carbon dioxide and a surface atmospheric pressure roughly the same as found 900 m (3,000 ft) underwater on Earth. Venus may have had water oceans, but they would have boiled off as the mean surface temperature rose to the current 735 K (462 °C; 863 °F).

A 2012 journal article stated that almost all lines of evidence indicate that it is unlikely to be possible to trigger a full runaway greenhouse on Earth merely by adding greenhouse gases to the atmosphere. However, the authors cautioned that "our understanding of the dynamics, thermodynamics, radiative transfer and cloud physics of hot and steamy atmospheres is weak", and that we "cannot therefore completely rule out the possibility that human actions might cause a transition, if not to full runaway, then at least to a much warmer climate state than the present one". A 2013 article concluded that a runaway greenhouse "could in theory be triggered by increased greenhouse forcing", but that "anthropogenic emissions are probably insufficient". Earth is expected to experience a runaway greenhouse effect "in about 2 billion years as solar luminosity increases".

See also
Top contributors to greenhouse gas emissions
Lapse rate
Climate change feedback
Tipping points in the climate system
Global dimming
Solar radiation management
Gas stove
A gas stove is a stove that is fuelled by combustible gas such as syngas, natural gas, propane, butane, liquefied petroleum gas or other flammable gas. Before the advent of gas, cooking stoves relied on solid fuels such as coal or wood. The first gas stoves were developed in the 1820s and a gas stove factory was established in England in 1836. This new cooking technology had the advantage of being easily adjustable and could be turned off when not in use. The gas stove, however, did not become a commercial success until the 1880s, by which time supplies of piped gas were available in cities and large towns in Britain. The stoves became widespread on the European Continent and in the United States in the early 20th century.

Gas stoves became more common when the oven was integrated into the base and the size was reduced to better fit in with the rest of the kitchen furniture. By the 1910s, producers started to enamel their gas stoves for easier cleaning. Ignition of the gas was originally by match; this was followed by the more convenient pilot light, which had the disadvantage of continually consuming gas. The oven still needed to be lit by match, and accidentally turning on the gas without igniting it could lead to an explosion. To prevent these types of accidents, oven manufacturers developed and installed a safety valve called a flame failure device for gas hobs (cooktops) and ovens. Most modern gas stoves have electronic ignition, automatic timers for the oven and extractor hoods to remove fumes.

Gas stoves are a common indoor fossil-fuel appliance that contributes significant levels of indoor air pollution, so they require good ventilation to maintain acceptable air quality. They also expose users to pollutants such as nitrogen dioxide, which can trigger respiratory diseases and has been linked to increased rates of asthma in children. Gas stoves also release methane. Research in 2022 estimated that the methane emissions from gas stoves in the United States were equivalent to the greenhouse gas emissions of 500,000 cars. About 80% of methane emissions were found to occur even when stoves were turned off, as the result of tiny leaks in gas lines and fittings. Although methane contains less carbon than other fuels, gas venting and unintended fugitive emissions throughout the supply chain result in natural gas having a similar carbon footprint to other fossil fuels overall. In June 2023, Stanford researchers found that combustion from gas stoves can raise indoor levels of benzene, a potent carcinogen linked to a higher risk of blood cell cancers, above the levels found in secondhand tobacco smoke.

History
The first gas stove was developed in 1802 by Zachäus Winzler, but this along with other attempts remained isolated experiments. James Sharp patented a gas stove in Northampton, England in 1826 and opened a gas stove factory in 1836. His invention was marketed by the firm Smith & Philips from 1828. An important figure in the early acceptance of this new technology was Alexis Soyer, the renowned chef at the Reform Club in London. From 1841, he converted his kitchen to consume piped gas, arguing that gas was cheaper overall because the supply could be turned off when the stove was not in use.

A gas stove was shown at the Great Exhibition in London in 1851, but it was only in the 1880s that the technology became a commercial success in England.
By that stage a large and reliable network for gas pipeline transport had spread over much of the country, making gas relatively cheap and efficient for domestic use. Gas stoves only became widespread on the European Continent and in the United States in the early 20th century. Early gas stoves were rather unwieldy, but soon the oven was integrated into the base and the size was reduced to fit in better with the rest of the kitchen furniture. By the early 1920s, gas stoves with enameled porcelain finishes for easier cleaning had become widely available, along with heavy use of insulation for fuel efficiency.

In the 1960s the American Gas Association ran an advertising campaign to promote gas stoves while also downplaying science showing their health risks, mirroring the tobacco industry playbook of creating uncertainty.

Ignition
Gas stoves today use two basic types of ignition sources: standing pilot and electric. A stove with a standing pilot has a small, continuously burning gas flame (called a pilot light) under the cooktop, between the front and back burners. When the stove is turned on, this flame lights the gas flowing out of the burners. The advantage of the standing pilot system is that it is simple and completely independent of any outside power source. A minor drawback is that the flame continuously consumes fuel even when the stove is not in use.

Early gas ovens did not have a pilot and had to be lit manually with a match. If the gas was accidentally left on, it would fill the oven and eventually the room, and a small spark, such as an arc from a light switch being turned on, could ignite the gas, triggering a violent explosion. To prevent these types of accidents, oven manufacturers developed and installed a safety valve called a flame failure device for gas hobs (cooktops) and ovens. The safety valve depends on a thermocouple that sends a signal to the valve to stay open. Although most modern gas stoves have electronic ignition, many households still have gas cooking ranges and ovens that need to be lit with a flame.

Electric ignition stoves use electric sparks to ignite the surface burners. This is the "clicking sound" audible just before the burner actually lights. The sparks are initiated by turning the gas burner knob to a position typically labeled "LITE" or by pressing an ignition button. Once the burner lights, the knob is turned further to modulate the flame size. Auto reignition is a refinement: the user need not know or understand the wait-then-turn sequence, but simply turns the burner knob to the desired flame size, and the sparking is turned off automatically when the flame lights. Auto reignition also provides a safety feature: the flame is automatically reignited if it goes out while the gas is still on, for example because of a gust of wind. If the power fails, surface burners must be lit manually with a match.

Electric ignition for ovens uses a "hot surface" or "glow bar" ignitor: a heating element that heats up to the gas's ignition temperature. A sensor detects when the glow bar is hot enough and opens the gas valve. Stoves with electric ignition must also be connected to gas-protection mechanisms such as a gas control breaker; for this reason, many manufacturers supply such stoves without an electrical plug.

Features

Burner heat
One of the important properties of a gas stove is the heat emitted by the burners. Ratings are quoted in both kilowatts and British thermal units per hour; the two are related by a fixed conversion factor, as the sketch below shows.
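This conversion is plain arithmetic (1 kW ≈ 3,412 BTU/h) and not specific to any stove; the following is a minimal illustrative sketch:

```python
# Minimal sketch: converting between the two burner-rating units used below.
# 1 kW = 3412.14 BTU/h (a fixed unit conversion, not a measured value).
BTU_PER_HOUR_PER_KW = 3412.14

def kw_to_btu_per_hour(kw: float) -> float:
    return kw * BTU_PER_HOUR_PER_KW

def btu_per_hour_to_kw(btu_h: float) -> float:
    return btu_h / BTU_PER_HOUR_PER_KW

# A "high output" burner of about 6 kW is roughly a 20,000 BTU/h burner.
print(f"6 kW ~ {kw_to_btu_per_hour(6):,.0f} BTU/h")           # ~20,473
print(f"10,000 BTU/h ~ {btu_per_hour_to_kw(10_000):.1f} kW")  # ~2.9
```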
Burner heat is typically specified in kilowatts or British thermal units per hour (BTU/h) and is based directly on the gas consumed rather than on the heat absorbed by pans. Often, a gas stove will have burners with different heat output ratings. For example, a gas cooktop may have a high-output burner, often in the range of 3 to 6 kilowatts (10,000 to 20,000 BTU/h), together with medium-output burners of 1.5 to 3 kW and low-output burners of 1 kW or less. The high-output burner is suitable for boiling a large pot of water quickly, sautéing and searing, while the low-output burners are good for simmering. Mean benzene emissions from gas and propane burners set on high, and from ovens set to 350 °F, ranged from 2.8 to 6.5 μg/min, 10 to 25 times higher than emissions from electric coil and radiant alternatives.

Some high-end cooktop models provide a higher range of heat, with heavy-duty burners that can reach 6 kilowatts (20,000 BTU/h) or more. These may be desired for preparing large quantities or special types of food and enable certain advanced cooking techniques. However, these burners produce greater emissions and necessitate better ventilation for safe operation. Higher-capacity burners may not benefit every potential user or dish.

Design and layout
In recent years, appliance manufacturers have made changes to the design and layout of gas stoves. Many modern cooktops have a lattice grate that covers the complete top, enabling cookware to be slid from one burner to another without being lifted over gaps in the cooktop. Some modern gas stoves also have a central fifth burner or an integrated griddle between the outer burners.

Size
The size of a kitchen gas stove usually ranges from 50 to 150 centimetres (20 to 60 in), and most manufacturers offer a range of sizes. Combined range-and-oven units are also available, usually in two styles: slide-in and freestanding. There isn't much of a style difference between them: slide-in ranges have lips on either side and controls, including the burner controls, on the front, while freestanding ranges have solid sides and controls placed behind the cooktop.

Oven
Many stoves have integrated ovens. Modern ovens often include a convection fan inside the oven to provide even air circulation and let the food cook evenly. Some modern ovens come with temperature sensors, which allow close control of baking, automatic shutoff after reaching a certain temperature, or holding a particular temperature throughout the cooking process. Ovens may also have two separate oven bays, which allows two different dishes to be cooked at the same time.

Programmable controls
Many gas stoves come with at least a few programmable controls to make handling easier. LCD displays and preset cooking routines are standard features on many basic and high-end models. Other programmable controls include precise pre-heating, automatic pizza settings and cook timers.

Safety factors
Modern gas stove ranges are safer than older models. Two of the major safety concerns with gas stoves are child-safe controls and accidental ignition. Some gas cooktops have knobs which can be accidentally switched on even with a gentle bump. Gas stoves are also at risk of overheating when frying oil: the oil temperature can reach the auto-ignition point and create an oil fire on the stove.
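The overheat-protection devices described next work by cutting the gas supply when a pan-temperature sensor reads too high. The following is a minimal sketch of that control logic only; the 250 °C threshold, function names and polling interval are illustrative assumptions, not taken from any real appliance standard:

```python
# Illustrative sketch of pan-overheat protection: a thermistor near the pan
# is polled, and the gas valve is closed if the temperature approaches
# cooking-oil auto-ignition. All names and the 250 C threshold are
# hypothetical; real appliances implement this in embedded firmware.
import time

CUTOFF_C = 250.0  # assumed safety threshold, below typical oil auto-ignition

def monitor(read_pan_temp_c, close_gas_valve, poll_seconds=1.0):
    """Poll the pan-temperature sensor; shut the gas off past the cutoff."""
    while True:
        if read_pan_temp_c() >= CUTOFF_C:
            close_gas_valve()
            break
        time.sleep(poll_seconds)

# Example wiring with stand-in sensor and valve functions:
if __name__ == "__main__":
    readings = iter([180.0, 220.0, 260.0])
    monitor(lambda: next(readings), lambda: print("gas valve closed"),
            poll_seconds=0.0)
```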
Japan, South Korea and China have regulated the addition of electronic safety devices to prevent pan overheating. The devices use a thermistor to monitor the temperature close to the pan, and cut off the gas supply if the heat is too high. Fire loss statistics for Japanese gas stoves showed a reduction in house fires caused by gas stoves in the years following 2008, when the safety devices were mandated.

Efficiency
The U.S. Department of Energy (DOE) ran tests in 2014 of cooktop energy transfer efficiency, simulating cooking while testing what percentage of a cooktop's energy is transferred to a test block. Gas had an efficiency of 43.9%, with ±0.5% repeatability in the measurement. This level of efficiency is only possible if the pan is big enough for the burner. Japanese gas flames are angled upwards towards the pot to increase efficiency. The efficiency of gas appliances can also be raised by using special pots with heatsink-like fins; Jetboil, for example, manufactures pots for portable stoves that use a corrugated ribbon to increase efficiency.

Health concerns
Carbon monoxide, formaldehyde, benzene and nitrogen dioxide from gas stoves contribute to indoor air pollution. Nitrogen dioxide can exacerbate respiratory illnesses such as asthma or chronic obstructive pulmonary disease. Studies have been performed correlating childhood asthma and gas stoves. A study of data from 1999–2004, published in The Lancet Respiratory Medicine, found "no evidence of an association between the use of gas as a cooking fuel and either asthma symptoms or asthma diagnosis". A 2013 meta-analysis concluded that gas cooking increases the risk of asthma in children. A 2020 Lancet systematic review surveyed 31 studies on gas cooking or heating, finding a pooled risk ratio of 1.17 for asthma. One study found that, among households with gas stoves, those that reported using ventilation had lower rates of asthma than those that did not. A 2023 meta-analysis estimated that in the United States one in eight cases of asthma in children is due to pollution from gas stoves. The asthma risk caused by gas stove exposure is similar in magnitude to that caused by secondhand smoke from tobacco. Stoves can produce levels of nitrogen dioxide that exceed outdoor safety standards. A 2020 RMI report found that pollution from gas stoves exacerbates asthma symptoms in children.

People interact more directly with their stove than with other gas appliances, increasing potential exposure to any natural gas constituents and compounds formed during combustion, including formaldehyde (CH2O), carbon monoxide (CO), and nitrogen oxides (NOx). Among all gas appliances, the stove is unique in that the byproducts of combustion are emitted directly into home air with no requirement for venting the exhaust outdoors. Cooking, especially high-heat frying, also releases smoke (measured as fine particulate matter), acrolein and polycyclic aromatic hydrocarbons. Mitigating indoor particulate pollution can involve running a range hood, opening a kitchen window, and running an air purifier. Range hoods are more effective at capturing and removing pollution from the rear burners than the front burners. California requires gas stoves to have higher levels of ventilation than electric stoves due to the nitrogen dioxide risk. Range hoods can be run for 15 minutes after cooking to reduce pollution. The U.S.
Consumer Product Safety Commission is investigating ways to reduce the health effects of gas stoves, including emissions and ventilation standards.

A 2023 study found that benzene, a known carcinogen, accumulated in homes to unhealthy levels when natural gas or propane stoves were used, especially when vent hoods were not used. The Stanford researchers determined that the benzene is emitted from the cooking gas, not the food being cooked. Benzene exposure causes both cancerous and noncancerous health effects: shorter-term exposure suppresses blood cell production, and chronic exposure increases the risk of leukemias and lymphomas. A 2002 study of pipelines in Boston found that natural gas contains non-methane impurities including heptane, hexane, cyclohexane, benzene and toluene.

After health concerns about gas stoves became more prominent in the 2020s and American localities regulated the addition of gas stoves to new buildings, the Republican Party in the United States pushed legislative bills to "save gas stoves". In June 2023, a bill in the Republican-controlled House of Representatives narrowly failed when a dozen Republican legislators voted against it because of a disagreement with the Republican leadership on unrelated issues.

Climate impact
Gas stoves are often run on natural gas. The extraction and consumption of natural gas is a major and growing contributor to climate change: both the gas itself (specifically methane) and the carbon dioxide released when natural gas is burned are greenhouse gases. In 2022, a research group investigated leakage in 53 homes in California and estimated that the methane emissions from gas stoves in the United States were equivalent, over a 20-year period, to the greenhouse gas emissions of 500,000 cars. About 80% of methane emissions occurred even when stoves were turned off, as the result of leaks in gas lines and fittings.

Some places, such as the Australian Capital Territory and New York State, have curtailed the installation of gas stoves and appliances in new construction for reasons of health, indoor air quality, and climate protection. As of 2023, the legality of gas stove bans in the United States is the subject of active lawsuits. Many electrification codes exempt commercial kitchens.

See also
Auto reignition
Electric stove
List of stoves

External links
Media related to Gas stoves at Wikimedia Commons
"Gas Stoves: The Fracking Tailpipe in Your Kitchen". The Science and Environmental Health Network. 19 January 2023. Retrieved 2023-01-23.
Shale gas
Shale gas is an unconventional natural gas that is found trapped within shale formations. Since the 1990s a combination of horizontal drilling and hydraulic fracturing has made large volumes of shale gas more economical to produce, and some analysts expect that shale gas will greatly expand worldwide energy supply.

Shale gas has become an increasingly important source of natural gas in the United States since the start of this century, and interest has spread to potential gas shales in the rest of the world. China is estimated to have the world's largest shale gas reserves.

A 2013 review by the United Kingdom Department of Energy and Climate Change noted that most studies of the subject have estimated that life-cycle greenhouse gas (GHG) emissions from shale gas are similar to those of conventional natural gas and much less than those from coal, usually about half the greenhouse gas emissions of coal; the noted exception was a 2011 study by Howarth and others of Cornell University, which concluded that shale GHG emissions were as high as those of coal. More recent studies have also concluded that life-cycle shale gas GHG emissions are much less than those of coal, among them studies by Natural Resources Canada (2012) and a consortium formed by the US National Renewable Energy Laboratory with a number of universities (2012).

Some 2011 studies pointed to high rates of decline of some shale gas wells as an indication that shale gas production may ultimately be much lower than currently projected. But shale-gas discoveries are also opening up substantial new resources of tight oil, also known as "shale oil".

History

United States
Shale gas was first extracted as a resource in Fredonia, New York, in 1821, in shallow, low-pressure fractures. Horizontal drilling began in the 1930s, and in 1947 a well was first fracked in the U.S.

Federal price controls on natural gas led to shortages in the 1970s. Faced with declining natural gas production, the federal government invested in many supply alternatives, including the Eastern Gas Shales Project, which lasted from 1976 to 1992, and the annual FERC-approved research budget of the Gas Research Institute, through which the federal government began extensive research funding in 1982, disseminating the results to industry. The federal government also provided tax credits and rules benefiting the industry in the 1980 Energy Act. The Department of Energy later partnered with private gas companies to complete the first successful air-drilled multi-fracture horizontal well in shale in 1986. The federal government further incentivized drilling in shale via the Section 29 tax credit for unconventional gas from 1980 to 2000. Microseismic imaging, a crucial input to both hydraulic fracturing in shale and offshore oil drilling, originated from coalbed research at Sandia National Laboratories. The DOE program also applied two technologies that had been developed previously by industry, massive hydraulic fracturing and horizontal drilling, to shale gas formations, in combination with microseismic imaging. Although the Eastern Gas Shales Project had increased gas production in the Appalachian and Michigan basins, shale gas was still widely seen as marginal to uneconomic without tax credits, and shale gas provided only 1.6% of US gas production in 2000, when the federal tax credits expired.

George P.
Mitchell is regarded as the father of the shale gas industry, since he made it commercially viable in the Barnett Shale by getting costs down to $4 per million British thermal units (1,100 megajoules). Mitchell Energy achieved the first economical shale fracture in 1998 using slick-water fracturing. Since then, natural gas from shale has been the fastest-growing contributor to total primary energy in the United States, and has led many other countries to pursue shale deposits. According to the IEA, shale gas could increase technically recoverable natural gas resources by almost 50%.

In 2000 shale gas provided only 1% of U.S. natural gas production; by 2010 it was over 20%, and the U.S. Energy Information Administration predicted that by 2035, 46% of the United States' natural gas supply would come from shale gas. The Obama administration believed that increased shale gas development would help reduce greenhouse gas emissions.

Geology
Because shales ordinarily have insufficient permeability to allow significant fluid flow to a wellbore, most shales are not commercial sources of natural gas. Shale gas is one of a number of unconventional sources of natural gas; others include coalbed methane, tight sandstones, and methane hydrates. Shale gas areas are often known as resource plays (as opposed to exploration plays). The geological risk of not finding gas is low in resource plays, but the potential profits per successful well are usually also lower.

Shale has low matrix permeability, so gas production in commercial quantities requires fractures to provide permeability. Shale gas has been produced for years from shales with natural fractures; the shale gas boom in recent years has been due to modern technology that uses hydraulic fracturing (fracking) to create extensive artificial fractures around well bores. Horizontal drilling is often used with shale gas wells, with lateral lengths up to 10,000 feet (3,000 m) within the shale, to create maximum borehole surface area in contact with the shale.

Shales that host economic quantities of gas have a number of common properties. They are rich in organic material (0.5% to 25%), and are usually mature petroleum source rocks in the thermogenic gas window, where high heat and pressure have converted petroleum to natural gas. They are brittle and rigid enough to maintain open fractures. Some of the gas produced is held in natural fractures, some in pore spaces, and some is adsorbed onto the shale matrix; the adsorption is a physisorption process, exothermic and spontaneous. The gas in the fractures is produced immediately; the gas adsorbed onto organic material is released as the formation pressure is drawn down by the well.

Shale gas by country
Although the shale gas potential of many nations is being studied, as of 2013 only the US, Canada, and China produced shale gas in commercial quantities, and only the US and Canada had significant shale gas production. While China has ambitious plans to dramatically increase its shale gas production, these efforts have been checked by inadequate access to technology, water, and land. The Energy Information Administration agency of the United States Department of Energy has published estimates of "technically recoverable" shale gas resources alongside numbers for proven natural gas reserves.
The US EIA had made an earlier estimate of total recoverable shale gas in various countries in 2011, which for some countries differed significantly from the 2013 estimates. The total recoverable shale gas in the United States, which was estimated at 862 trillion cubic feet (tcf) in 2011, was revised downward to 665 tcf in 2013. Recoverable shale gas in Canada, which was estimated to be 388 tcf in 2011, was revised upward to 573 tcf in 2013.

For the United States, EIA estimated (2013) a total "wet natural gas" resource of 2,431 tcf, including both shale and conventional gas; shale gas was estimated to be 27% of the total resource. "Wet natural gas" is methane plus natural gas liquids, and is more valuable than dry gas. For the rest of the world (excluding the US), EIA estimated (2013) a total wet natural gas resource of 20,451 tcf (579.1 trillion cubic metres), of which shale gas was estimated to be 32%.

Europe has a shale gas resource estimate of 639 tcf (18.1 trillion cubic metres), compared with America's 862 tcf (24.4 trillion cubic metres), but its geology is more complicated and the oil and gas more expensive to extract, with a well likely to cost as much as three and a half times more than one in the United States. One market analysis projected Europe to be the fastest-growing region, with the highest compound annual growth rate (59.5% in terms of volume), owing to shale gas resource estimates in more than 14 European countries.

Environment
The extraction and use of shale gas can affect the environment through the leaking of extraction chemicals and waste into water supplies, the leaking of greenhouse gases during extraction, and the pollution caused by the improper processing of natural gas. A challenge to preventing pollution is that shale gas extraction varies widely in this regard, even between different wells in the same project; the processes that reduce pollution sufficiently in one extraction may not be enough in another.

In 2013 the European Parliament agreed that environmental impact assessments would not be mandatory for shale gas exploration activities, and that shale gas extraction activities would be subject to the same terms as other gas extraction projects.

Climate
Barack Obama's administration had sometimes promoted shale gas, in part because of its belief that it releases fewer greenhouse gas (GHG) emissions than other fossil fuels. In a 2010 letter to President Obama, Martin Apple of the Council of Scientific Society Presidents cautioned against a national policy of developing shale gas without a more certain scientific basis for the policy. This umbrella organization, which represents 1.4 million scientists, noted that shale gas development "may have greater GHG emissions and environmental costs than previously appreciated."

In late 2010, the U.S. Environmental Protection Agency issued a report which concluded that shale gas emits larger amounts of methane, a potent greenhouse gas, than does conventional gas, but still far less than coal. Methane is a powerful greenhouse gas, although it stays in the atmosphere for only one tenth as long a period as carbon dioxide.
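A global warming potential (GWP) converts a mass of methane into the mass of CO2 with the same warming effect over a chosen time horizon, so CO2-equivalent = methane mass × GWP. A minimal sketch of that arithmetic, using the 20-year and 100-year multipliers quoted in the next paragraph (the leaked mass is an arbitrary illustrative figure):

```python
# Minimal sketch of GWP arithmetic: CO2-equivalent = methane mass x GWP.
# The multipliers are the values quoted in the following paragraph; the
# leaked mass is an arbitrary example, not a measured figure.
GWP_CH4_20YR = 105.0
GWP_CH4_100YR = 33.0

leaked_ch4_tonnes = 1_000.0  # example: 1,000 t of methane leaked

for label, gwp in [("20-year", GWP_CH4_20YR), ("100-year", GWP_CH4_100YR)]:
    print(f"{label}: {leaked_ch4_tonnes * gwp:,.0f} t CO2-equivalent")
```

The roughly threefold gap between the two horizons is why the choice of time frame dominated the Howarth-Cathles dispute described below.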
Recent evidence suggests that methane has a global warming potential (GWP) that is 105-fold greater than carbon dioxide when viewed over a 20-year period and 33-fold greater when viewed over a 100-year period, on a mass-to-mass basis. Several studies which have estimated lifecycle methane leakage from shale gas development and production have found a wide range of leakage rates, from less than 1% of total production to nearly 8%.

A 2011 study published in Climatic Change Letters claimed that the production of electricity using shale gas may produce as large or larger a life-cycle greenhouse gas footprint than electricity generated with oil or coal. In the peer-reviewed paper, Cornell University professor Robert W. Howarth, a marine ecologist, and colleagues claimed that once methane leak and venting impacts are included, the life-cycle greenhouse gas footprint of shale gas is far worse than those of coal and fuel oil when viewed over the integrated 20-year period after emission; on the 100-year integrated time frame, this analysis claims shale gas is comparable to coal and worse than fuel oil. However, other studies have pointed out flaws with the paper and come to different conclusions. Among those are assessments by experts at the U.S. Department of Energy, peer-reviewed studies by Carnegie Mellon University and the University of Maryland, and the Natural Resources Defense Council, which claimed that the Howarth et al. paper's use of a 20-year time horizon for the global warming potential of methane is "too short a period to be appropriate for policy analysis."

In January 2012, Howarth's colleagues at Cornell University, Lawrence Cathles et al., responded with their own peer-reviewed assessment, noting that the Howarth paper was "seriously flawed" because it "significantly overestimate[s] the fugitive emissions associated with unconventional gas extraction, undervalue[s] the contribution of 'green technologies' to reducing those emissions to a level approaching that of conventional gas, base[s] their comparison between gas and coal on heat rather than electricity generation (almost the sole use of coal), and assume[s] a time interval over which to compute the relative climate impact of gas compared to coal that does not capture the contrast between the long residence time of CO2 and the short residence time of methane in the atmosphere." The author of that response, Lawrence Cathles, wrote that "shale gas has a GHG footprint that is half and perhaps a third that of coal," based upon "more reasonable leakage rates and bases of comparison."

In April 2013 the U.S. Environmental Protection Agency lowered its estimate of how much methane leaks from wells, pipelines and other facilities during production and delivery of natural gas by 20 percent. The EPA report on greenhouse emissions credited tighter pollution controls instituted by the industry for cutting an average of 41.6 million metric tons of methane emissions annually from 1990 through 2010, a reduction of more than 850 million metric tons overall. The Associated Press noted that "The EPA revisions came even though natural gas production has grown by nearly 40 percent since 1990." Using data from the Environmental Protection Agency's 2013 Greenhouse Gas Inventory yields a methane leakage rate of about 1.4%, down from 2.3% in the EPA's previous inventory.

Life-cycle comparison beyond global warming potential
A 2014 study from Manchester University presented the "first full life cycle assessment of shale gas used for electricity generation."
By full life cycle assessment, the authors explained that they meant the evaluation of nine environmental factors beyond the commonly performed evaluation of global warming potential. The authors concluded, in line with most of the published studies for other regions, that shale gas in the United Kingdom would have a global warming potential "broadly similar" to that of conventional North Sea gas, although it could be higher if fugitive methane emissions are not controlled or if per-well ultimate recoveries in the UK are small. For the other parameters, comparing shale gas in the United Kingdom with coal, conventional and liquefied gas, nuclear, wind and solar (PV), the highlighted conclusions were that shale gas is worse than coal for three impacts and better than renewables for four; that it has higher photochemical smog and terrestrial toxicity than the other options; and that shale gas is a sound environmental option only if accompanied by stringent regulation. Dr James Verdon has published a critique of the data produced and the variables that may affect the results.

Water and air quality
Chemicals are added to the water to facilitate the underground fracturing process that releases natural gas. Fracturing fluid is primarily water with approximately 0.5% chemical additives (friction reducers, rust-countering agents, and agents that kill microorganisms). Since (depending on the size of the area) millions of liters of water are used, this means that hundreds of thousands of liters of chemicals are often injected into the subsurface. About 50% to 70% of the injected volume of contaminated water is recovered and stored in above-ground ponds to await removal by tanker; the remaining volume stays in the subsurface. Hydraulic fracturing opponents fear that it can lead to contamination of groundwater aquifers, though the industry deems this "highly unlikely". However, foul-smelling odors and heavy metals contaminating the local above-ground water supply have been reported.

Besides using water and industrial chemicals, it is also possible to frack shale gas with only liquefied propane gas, which reduces the environmental degradation considerably. The method was invented by GasFrac of Alberta, Canada.

Hydraulic fracturing was exempted from the Safe Drinking Water Act in the Energy Policy Act of 2005.

A study published in May 2011 concluded that shale gas wells had seriously contaminated shallow groundwater supplies in northeastern Pennsylvania with flammable methane. However, the study did not discuss how pervasive such contamination might be in other areas drilled for shale gas.

The United States Environmental Protection Agency (EPA) announced on 23 June 2011 that it would examine claims of water pollution related to hydraulic fracturing in Texas, North Dakota, Pennsylvania, Colorado and Louisiana. On 8 December 2011, the EPA issued a draft finding which stated that groundwater contamination in Pavillion, Wyoming may be the result of fracking in the area. The EPA stated that the finding was specific to the Pavillion area, where the fracking techniques differ from those used in other parts of the U.S. Doug Hock, a spokesman for the company which owns the Pavillion gas field, said that it was unclear whether the contamination came from the fracking process. Wyoming's Governor Matt Mead called the EPA draft report "scientifically questionable" and stressed the need for additional testing.
The Casper Star-Tribune also reported, on 27 December 2011, that the EPA's sampling and testing procedures "didn't follow their own protocol", according to Mike Purcell, the director of the Wyoming Water Development Commission.

A 2011 study by the Massachusetts Institute of Technology concluded that "The environmental impacts of shale development are challenging but manageable." The study addressed groundwater contamination, noting: "There has been concern that these fractures can also penetrate shallow freshwater zones and contaminate them with fracturing fluid, but there is no evidence that this is occurring". The study blames known instances of methane contamination on a small number of substandard operations, and encourages the use of industry best practices to prevent such events from recurring.

In a report dated 25 July 2012, the U.S. Environmental Protection Agency announced that it had completed its testing of private drinking water wells in Dimock, Pennsylvania. Data previously supplied to the agency by residents, the Pennsylvania Department of Environmental Protection, and Cabot Oil and Gas Exploration had indicated arsenic, barium or manganese in well water at five homes at levels that could present a health concern. In response, water treatment systems that can reduce concentrations of those hazardous substances to acceptable levels at the tap were installed at the affected homes. Based on the outcome of sampling after the treatment systems were installed, the EPA concluded that additional action by the agency was not required.

A Duke University study of Blacklick Creek, Pennsylvania, carried out over two years, took samples from the creek upstream and downstream of the discharge point of the Josephine Brine Treatment Facility. Radium levels in the sediment at the discharge point are around 200 times the amount upstream of the facility. The radium levels are "above regulated levels" and present the "danger of slow bio-accumulation", eventually in fish. The Duke study "is the first to use isotope hydrology to connect the dots between shale gas waste, treatment sites and discharge into drinking water supplies." The study recommended "independent monitoring and regulation" in the United States due to perceived deficiencies in self-regulation, commenting: "What is happening is the direct result of a lack of any regulation. If the Clean Water Act was applied in 2005 when the shale gas boom started this would have been prevented. In the UK, if shale gas is going to develop, it should not follow the American example and should impose environmental regulation to prevent this kind of radioactive buildup."

According to the US Environmental Protection Agency, the Clean Water Act applies to surface stream discharges from shale gas wells: "6) Does the Clean Water Act apply to discharges from Marcellus Shale Drilling operations? Yes. Natural gas drilling can result in discharges to surface waters. The discharge of this water is subject to requirements under the Clean Water Act (CWA)."

Earthquakes
Hydraulic fracturing routinely produces microseismic events much too small to be detected except by sensitive instruments. These microseismic events are often used to map the horizontal and vertical extent of the fracturing.
However, as of late 2012, there had been three known instances worldwide of hydraulic fracturing, through induced seismicity, triggering quakes large enough to be felt by people.

On 26 April 2012, the Asahi Shimbun reported that United States Geological Survey scientists were investigating the recent increase in the number of magnitude 3 and greater earthquakes in the midcontinent of the United States. Beginning in 2001, the average number of earthquakes per year of magnitude 3 or greater increased significantly, culminating in a six-fold increase in 2011 over 20th-century levels. A researcher at the Center for Earthquake Research and Information of the University of Memphis suggested that water injected back into a fault tends to cause earthquakes by making the fault slip.

Over 109 small earthquakes (Mw 0.4–3.9) were detected from January 2011 to February 2012 in the Youngstown, Ohio area, where no earthquakes were known in the past. These shocks were close to a deep fluid injection well. The 14-month seismicity included six felt earthquakes and culminated in a Mw 3.9 shock on 31 December 2011. Among the 109 shocks, 12 events greater than Mw 1.8 were detected by the regional network and accurately relocated, whereas 97 small earthquakes (0.4 < Mw < 1.8) were detected by a waveform correlation detector. The accurately located earthquakes lay along a subsurface fault trending ENE-WSW, consistent with the focal mechanism of the main shock, and occurred at depths of 3.5–4.0 km in the Precambrian basement.

On 19 June 2012, the United States Senate Committee on Energy & Natural Resources held a hearing entitled "Induced Seismicity Potential in Energy Technologies." Dr. Murray Hitzman, the Charles F. Fogarty Professor of Economic Geology in the Department of Geology and Geological Engineering at the Colorado School of Mines in Golden, CO, testified that "About 35,000 hydraulically fractured shale gas wells exist in the United States. Only one case of felt seismicity in the United States has been described in which hydraulic fracturing for shale gas development is suspected, but not confirmed. Globally only one case of felt induced seismicity at Blackpool, England has been confirmed as being caused by hydraulic fracturing for shale gas development."

The relative impacts of natural gas and coal

Human health impacts
A comprehensive review of the public health effects of energy fuel cycles in Europe finds that coal causes 6 to 98 deaths per TWh (average 25 deaths per TWh), compared to natural gas's 1 to 11 deaths per TWh (average 3 deaths per TWh). These numbers include both accidental deaths and pollution-related deaths. Coal mining is one of the most dangerous professions in the United States, resulting in between 20 and 40 deaths annually, compared to between 10 and 20 for oil and gas extraction. Worker accident risk is also far higher with coal than gas: in the United States, the oil and gas extraction industry is associated with one to two injuries per 100 workers each year, while coal mining contributes four injuries per 100 workers each year. Coal mines collapse, and can take down roads, water and gas lines, buildings and many lives with them.

Average damages from coal pollutants are two orders of magnitude larger than damages from natural gas: SO2, NOx, and particulate matter from coal plants create annual damages of $156 million per plant, compared to $1.5 million per gas plant.
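Treating the quoted fuel-cycle mortality averages as point estimates gives a simple displacement calculation; the displaced generation volume below is an arbitrary example, not a figure from the review:

```python
# Minimal sketch: expected deaths avoided when gas-fired generation displaces
# coal, using the average fuel-cycle mortality figures quoted above.
COAL_DEATHS_PER_TWH = 25.0  # European fuel-cycle average quoted above
GAS_DEATHS_PER_TWH = 3.0

displaced_twh = 100.0  # arbitrary example: 100 TWh of coal power displaced
avoided = (COAL_DEATHS_PER_TWH - GAS_DEATHS_PER_TWH) * displaced_twh
print(f"~{avoided:,.0f} premature deaths avoided "
      f"per {displaced_twh:.0f} TWh of coal displaced")  # ~2,200
```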
Coal-fired power plants in the United States emit 17–40 times more SOx emissions per MWh than natural gas, and 1–17 times as much NOx per MWh. Lifecycle CO2 emissions from coal plants are 1.8–2.3 times greater per kWh than natural gas emissions.

The air quality advantages of natural gas over coal have been borne out in Pennsylvania, according to studies by the RAND Corporation and the Pennsylvania Department of Environmental Protection. The shale boom in Pennsylvania has led to dramatically lower emissions of sulfur dioxide, fine particulates, and volatile organic compounds (VOCs).

Physicist Richard A. Muller has said that the public health benefits from shale gas, by displacing harmful air pollution from coal, far outweigh its environmental costs. In a 2013 report for the Centre for Policy Studies, Muller wrote that air pollution, mostly from coal burning, kills over three million people each year, primarily in the developing world. The report states that "Environmentalists who oppose the development of shale gas and fracking are making a tragic mistake." In China, shale gas development is seen as a way to shift away from coal and decrease the serious air pollution problems created by burning coal.

Social impacts
Shale gas development leads to a series of tiered socio-economic effects during boom conditions, both positive and negative. Along with other forms of unconventional energy, shale oil and gas extraction has three direct initial aspects: increased labour demand (employment); income generation (higher wages); and disturbance to land and/or other economic activity, potentially resulting in compensation. Following these primary direct effects, secondary effects occur: in-migration (to meet labour demand), attracting temporary and/or permanent residents, and increased demand for goods and services, leading to increased indirect employment. The latter two of these can fuel each other in a circular relationship during boom conditions (i.e. increased demand for goods and services creates employment, which increases demand for goods and services). These increases place strain on existing infrastructure. These conditions lead to tertiary socio-economic effects in the form of increased housing values; increased rental costs; construction of new dwellings (which may take time to be completed); demographic and cultural changes as new types of people move to the host region; changes to income distribution; potential for conflict; potential for increased substance abuse; and provision of new types of services. The reverse of these effects occurs under bust conditions, with a decline in primary effects leading to a decline in secondary effects and so on. However, the bust period of unconventional extraction may not be as severe as for conventional energy extraction. Due to the dispersed nature of the industry and its ability to adjust drilling rates, there is debate in the literature as to how intense the bust phase is and how host communities can maintain social resilience during downturns.

Landscape impacts
Coal mining radically alters whole mountain and forest landscapes. Beyond the coal removed from the earth, large areas of forest are turned inside out and blackened with toxic and radioactive chemicals.
There have been reclamation successes, but hundreds of thousands of acres of abandoned surface mines in the United States have not been reclaimed, and reclamation of certain terrain (including steep terrain) is nearly impossible.

Whereas coal extraction requires altering landscapes far beyond the area where the coal is, aboveground natural gas equipment takes up just one percent of the total surface land area from which gas will be extracted. The environmental impact of gas drilling has changed radically in recent years: vertical wells into conventional formations used to take up one-fifth of the surface area above the resource, a twenty-fold higher impact than current horizontal drilling requires. A six-acre horizontal drill pad can thus extract gas from an underground area 1,000 acres in size.

The impact of natural gas on landscapes is even less, and shorter in duration, than the impact of wind turbines. The footprint of a shale gas derrick (3–5 acres) is only a little larger than the land area necessary for a single wind turbine, but it requires less concrete, stands one-third as tall, and is present for just 30 days instead of 20–30 years. Between 7 and 15 weeks are spent setting up the drill pad and completing the actual hydraulic fracture. At that point, the drill pad is removed, leaving behind a single garage-sized wellhead that remains for the lifetime of the well. A study published in 2015 on the Fayetteville Shale found that a mature gas field impacted about 2% of the land area and substantially increased edge-habitat creation; the average land impact per well was 3 hectares (about 7 acres).

Water
With coal mining, waste materials are piled at the surface of the mine, creating aboveground runoff that pollutes and alters the flow of regional streams. As rain percolates through waste piles, soluble components are dissolved in the runoff and cause elevated total dissolved solids (TDS) levels in local water bodies. Sulfates, calcium, carbonates and bicarbonates – the typical runoff products of coal-mine waste materials – make water unusable for industry or agriculture and undrinkable for humans. Acid mine wastewater can drain into groundwater, causing significant contamination. Explosive blasting in a mine can cause groundwater to seep to lower-than-normal depths or connect two aquifers that were previously distinct, exposing both to contamination by mercury, lead, and other toxic heavy metals.

Contamination of surface waterways and groundwater with fracking fluids is problematic. Shale gas deposits are generally several thousand feet below ground, but there have been instances of methane migration, improper treatment of recovered wastewater, and pollution via reinjection wells. In most cases, however, the life-cycle water intensity and pollution associated with coal production and combustion far outweigh those related to shale gas production. Coal resource production requires at least twice as much water per million British thermal units compared to shale gas production. And while regions like Pennsylvania have experienced an absolute increase in water demand for energy production thanks to the shale boom, shale wells actually produce less than half the wastewater per unit of energy compared to conventional natural gas.

Coal-fired power plants consume two to five times as much water as natural gas plants: where 520–1,040 gallons of water are required per MWh of coal, gas-fired combined-cycle power requires 130–500 gallons per MWh.
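Those per-MWh figures translate directly into plant-level volumes. A minimal sketch, assuming a hypothetical 1 GW plant running at a 60% capacity factor (both assumptions are for illustration only, not from the cited studies):

```python
# Minimal sketch: annual cooling-water use implied by the per-MWh figures
# quoted above. Plant size and capacity factor are illustrative assumptions.
HOURS_PER_YEAR = 8760
plant_mw = 1000.0        # assumed 1 GW plant
capacity_factor = 0.60   # assumed fraction of the year at full output

annual_mwh = plant_mw * HOURS_PER_YEAR * capacity_factor  # ~5.26 million MWh

for fuel, (lo, hi) in [("coal (520-1,040 gal/MWh)", (520, 1040)),
                       ("gas combined cycle (130-500 gal/MWh)", (130, 500))]:
    print(f"{fuel}: {annual_mwh * lo / 1e9:.1f}-"
          f"{annual_mwh * hi / 1e9:.1f} billion gallons/yr")
# Output: coal ~2.7-5.5 billion gal/yr vs. gas ~0.7-2.6 billion gal/yr.
```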
The environmental impact of water consumption at the point of power generation depends on the type of power plant: plants either use evaporative cooling towers to release excess heat or discharge water to nearby rivers. Natural gas combined-cycle (NGCC) plants, which capture the exhaust heat generated by combusting natural gas to power a steam generator, are considered the most efficient large-scale thermal power plants. One study found that the life-cycle demand for water from coal power in Texas could be more than halved by switching the fleet to NGCC.

All told, shale gas development in the United States represents less than half a percent of total domestic freshwater consumption, although this portion can reach as high as 25 percent in particularly arid regions.

Hazards
Drilling to depths of 1,000 to 3,000 m, followed by injection of a fluid composed of water, sand and detergents under pressure (600 bar), is required to fracture the rock and release the gas. In the United States, these operations have already caused groundwater contamination, mainly as a result of hydrocarbon leakage along the casings. In addition, between 2% and 8% of the extracted fuel may be released to the atmosphere at wells (again in the United States); this released gas is mainly methane (CH4), a greenhouse gas that is considerably more powerful than CO2. Surface installations must sit on concrete or paved ground connected to the road network, and a gas pipeline is required to evacuate production. In total, each extraction site would occupy an average area of 3.6 ha, while individual gas fields are relatively small, so exploitation of shale gas could lead to fragmentation of landscapes. Finally, a borehole requires about 20 million liters of water, the daily consumption of about 100,000 inhabitants.

Economics
Although shale gas has been produced for more than 100 years in the Appalachian Basin and the Illinois Basin of the United States, the wells were often marginally economic. Advances in hydraulic fracturing and horizontal completions have made shale-gas wells more profitable. Improvements in moving drilling rigs between nearby locations, and the use of single well pads for multiple wells, have increased the productivity of drilling shale gas wells. As of June 2011, the validity of the claims of economic viability of these wells had begun to be publicly questioned. Shale gas tends to cost more to produce than gas from conventional wells, because of the expense of the massive hydraulic fracturing treatments required to produce shale gas, and of horizontal drilling.

The cost of extracting offshore shale gas in the UK was estimated to be more than $200 per barrel of oil equivalent (UK North Sea oil prices were about $120 per barrel in April 2012). However, no cost figures were made public for onshore shale gas.

North America has been the leader in developing and producing shale gas.
The economic success of the Barnett Shale play in Texas in particular has spurred the search for other sources of shale gas across the United States and Canada. Some Texas residents think fracking is using too much of their groundwater, but drought and other growing uses are also among the causes of the water shortage there.

A Visiongain research report calculated the 2011 worth of the global shale-gas market as $26.66 billion.

A 2011 New York Times investigation of industry emails and internal documents found that the financial benefits of unconventional shale gas extraction may be less than previously thought, due to companies intentionally overstating the productivity of their wells and the size of their reserves. The article was criticized by, among others, the New York Times' own Public Editor for lack of balance in omitting facts and viewpoints favorable to shale gas production and economics.

In the first quarter of 2012, the United States imported 840 billion cubic feet (Bcf) of natural gas (785 Bcf from Canada) while exporting 400 Bcf (mostly to Canada), both mainly by pipeline. Almost none is exported by ship as LNG, as that would require expensive facilities. In 2012, prices went down to US$3 per million British thermal units ($10/MWh) due to shale gas.

A recent academic paper on the economic impacts of shale gas development in the US finds that natural gas prices have dropped dramatically in places with shale deposits and active exploration. Natural gas for industrial use has become cheaper by around 30% compared to the rest of the US. This stimulates local energy-intensive manufacturing growth, but brings the lack of adequate pipeline capacity in the US into sharp relief.

One of the byproducts of shale gas exploration is the opening up of deep underground shale deposits to "tight oil", or shale oil, production. By 2035, shale oil production could "boost the world economy by up to $2.7 trillion", according to a PricewaterhouseCoopers (PwC) report, with the potential to reach up to 12 percent of the world's total oil production, touching 14 million barrels a day, "revolutionizing" the global energy markets over the next few decades.

According to a 2013 Forbes magazine article, generating electricity by burning natural gas is cheaper than burning coal if the price of gas remains below US$3 per million British thermal units ($10/MWh), or about $3 per 1,000 cubic feet. Also in 2013, Ken Medlock, Senior Director of the Baker Institute's Center for Energy Studies, researched US shale gas break-even prices: "Some wells are profitable at $2.65 per thousand cubic feet, others need $8.10…the median is $4.85," Medlock said. Energy consultant Euan Mearns estimates that, for the US, "minimum costs [are] in the range $4 to $6 / mcf. [per 1000 cubic feet or million BTU]."

See also
Unconventional (oil & gas) reservoir
Biogas
Oil sands
Peak oil
Tight gas
Fracking
Underground coal gasification

Further reading
Gamper-Rabindran, Shanti, ed. The Shale Dilemma: A Global Perspective on Fracking and Shale Development (U of Pittsburgh Press, 2018)

External links
Unconventional Gas and Implications for the LNG Market by Christopher Gascoyne and Alexis Aik. A working paper written for the 2011 Pacific Energy Summit hosted by the National Bureau of Asian Research.
The Shale Gas Boom: The global implications of the rise of unconventional fossil energy, FIIA Briefing Paper 122, 20 March 2013, The Finnish Institute of International Affairs.
A Comparison between Shale Gas in China and Unconventional Fuel Development in the United States: Health, Water and Environmental Risks by Paolo Farah and Riccardo Tremolada. A paper presented at the Colloquium on Environmental Scholarship 2013 hosted by Vermont Law School (11 October 2013).
Map of Assessed Shale Gas in the United States, 2012, United States Geological Survey.
Registry will Help Study Health Impact from Living Near Shale Gas Wells, Birth Defect Research for Children Newsletter, May 2017.
Air pollution in India
Air pollution in India is a serious environmental issue. Of the 30 most polluted cities in the world, 21 were in India in 2019. As per a study based on 2016 data, at least 140 million people in India breathe air that is 10 times or more over the WHO safe limit, and 13 of the world's 20 cities with the highest annual levels of air pollution are in India. 51% of the pollution comes from industry, 27% from vehicles, 17% from crop burning and 5% from other sources. Air pollution contributes to the premature deaths of 2 million Indians every year. In urban areas, emissions come from vehicles and industry, whereas in rural areas much of the pollution stems from biomass burning for cooking and keeping warm. In autumn and spring months, large-scale crop residue burning in agricultural fields – a cheaper alternative to mechanical tilling – is a major source of smoke, smog and particulate pollution. India has low per capita emissions of greenhouse gases, but the country as a whole is the third-largest greenhouse gas producer after China and the United States. A 2013 study on non-smokers found that Indians have 30% weaker lung function than Europeans.

The Air (Prevention and Control of Pollution) Act was passed in 1981 to regulate air pollution, but has failed to reduce pollution because of poor enforcement of the rules.

In 2015, the Government of India, together with IIT Kanpur, launched the National Air Quality Index. In 2019, India launched the National Clean Air Programme with a tentative national target of a 20–30% reduction in PM2.5 and PM10 concentrations by 2024, considering 2017 as the base year for comparison. It will be rolled out in 102 cities that are considered to have air quality worse than the National Ambient Air Quality Standards. There are other initiatives, such as the Great Green Wall of Aravalli, a 1,600-kilometre-long and 5-kilometre-wide green ecological corridor along the Aravalli range from Gujarat to Delhi, which will also connect to the Shivalik hill range and involves planting 1.35 billion (135 crore) new native trees over 10 years to combat the pollution. In December 2019, IIT Bombay, in partnership with the McKelvey School of Engineering of Washington University in St. Louis, launched the Aerosol and Air Quality Research Facility to study air pollution in India. According to a Lancet study, nearly 1.67 million deaths and an estimated loss of output worth USD 28.8 billion were the price India paid for worsening air pollution in 2019.

Causes

Fuel and biomass burning
Fuel wood and biomass burning is the primary reason for the near-permanent haze and smoke observed above rural and urban India, and in satellite pictures of the country. Fuelwood and biomass cakes are used for cooking and general heating needs, burnt in cook stoves known as chulha (also chullha or chullah) in some parts of India. These cook stoves are present in over 100 million Indian households and are used two to three times a day. Some reports, including one by the World Health Organization, claim that 300,000 to 400,000 people in India die of indoor air pollution and carbon monoxide poisoning every year because of biomass burning and use of chulhas. The carbon-containing gases released from biomass fuels are many times more reactive than cleaner fuels such as liquefied petroleum gas. Air pollution is also the main cause of the Asian brown cloud, which is delaying the start of the monsoon.
The burning of biomass and firewood will not stop until electricity or clean-burning fuel and combustion technologies become reliably available and widely adopted in rural and urban India. India is the world's largest consumer of fuelwood, agricultural waste and biomass for energy purposes. According to the most recent available nationwide study, India used 148.7 million tonnes of coal-replacement-equivalent fuelwood and biomass annually for domestic energy use. India's national average annual per capita consumption of fuelwood, agricultural waste and biomass cakes was 206 kilograms of coal equivalent. The overall contribution of fuelwood, including sawdust and wood waste, was about 46% of the total, the rest being agricultural waste and biomass dung cakes. Traditional fuel (fuelwood, crop residue and dung cake) dominates domestic energy use in rural India, accounting for about 90% of the total; in urban areas, this traditional fuel constitutes about 24% of the total. India burns ten times more fuelwood every year than the United States; the fuelwood quality in India differs from the dry firewood of the United States; and the Indian stoves in use are less efficient, thereby producing more smoke and air pollutants per kilogram equivalent. Unsanctioned tyre pyrolysis plants, which recycle rubber tyres into low-grade oil and carbon black, are widespread in India and contribute to severe air pollution and health problems.
Fuel adulteration
Some Indian taxis and auto-rickshaws run on adulterated fuel blends. Adulteration of gasoline and diesel with lower-priced fuels is common in South Asia, including India. Some adulterants increase emissions of harmful pollutants from vehicles, worsening urban air pollution. Financial incentives arising from differential taxes are generally the primary cause of fuel adulteration. In India and other developing countries, gasoline carries a much higher tax than diesel, which in turn is taxed more than kerosene meant as a cooking fuel, while some solvents and lubricants carry little or no tax. As fuel prices rise, public transport drivers cut costs by blending the cheaper, lightly taxed hydrocarbon into the highly taxed one. The blending may be as much as 20–30 percent. For a low-wage driver, the adulteration can yield short-term savings that are significant over the month. The consequences for long-term air pollution, quality of life and health are simply ignored, as are the reduced life of the vehicle engine and higher maintenance costs, particularly if the taxi, auto-rickshaw or truck is rented for a daily fee. Adulterated fuel increases tailpipe emissions of hydrocarbons (HC), carbon monoxide (CO), oxides of nitrogen (NOx) and particulate matter (PM). Among air toxin emissions, which fall into the category of unregulated emissions, those of primary concern are benzene and polyaromatic hydrocarbons (PAHs), both well-known carcinogens. Kerosene is more difficult to burn than gasoline; its addition results in higher levels of HC, CO and PM emissions even from catalyst-equipped cars. The higher sulfur level of kerosene is another issue.
Traffic congestion
Traffic congestion is severe in India's cities and towns. It has several causes, including an increase in the number of vehicles per kilometre of available road, a lack of intra-city divided-lane highways and intra-city expressway networks, a lack of inter-city expressways, traffic accidents, and chaos due to poor enforcement of traffic laws.
Traffic congestion reduces the average traffic speed. Scientific studies show that at low speeds, vehicles burn fuel inefficiently and pollute more per trip. For example, a study in the United States found that for the same trip, cars consumed more fuel and polluted more when traffic was congested than when it flowed freely. At average trip speeds between 20 and 40 kilometres per hour, the cars' pollutant emissions were twice those at average speeds of 55 to 75 kilometres per hour. At average trip speeds between 5 and 20 kilometres per hour, the cars' pollutant emissions were 4 to 8 times those at average speeds of 55 to 70 kilometres per hour. Fuel efficiency was similarly much worse under traffic congestion. Traffic gridlock in Delhi and other Indian cities is extreme. This has been shown to result in a build-up of local pollution, particularly under stagnant conditions. The average trip speed on many Indian city roads is less than 20 kilometres per hour; a 10-kilometre trip can take 30 minutes or more. At such speeds, vehicles in India emit air pollutants 4 to 8 times more than they would with less traffic congestion; Indian vehicles also consume much more fuel per trip, enlarging their carbon footprint, than they would if congestion were lower. Emissions of particles and heavy metals increase over time because the growth of the fleet and mileage outpaces the efforts to curb emissions. In cities like Bangalore, around 50% of children suffer from asthma.
Greenhouse gas emissions
Effects
Health costs of air pollution
The most important reason for concern over the worsening air pollution in the country is its effect on the health of individuals. Long-term exposure to particulate matter can lead to respiratory and cardiovascular diseases such as asthma, bronchitis, COPD, lung cancer and heart attack. The Global Burden of Disease Study for 2010, published in 2013, found that outdoor air pollution was the fifth-largest killer in India and that around 620,000 early deaths occurred from air pollution-related diseases in 2010. According to a WHO study, 13 of the 20 most-polluted cities in the world are in India; however, the accuracy and methodology of the WHO study were questioned by the Government of India. India also has one of the highest numbers of COPD patients and the highest number of deaths due to COPD. Over a million Indians die prematurely every year due to air pollution, according to the non-profit Health Effects Institute. Over two million children – half the children in Delhi – have abnormalities in their lung function, according to the Delhi Heart and Lung Institute. Air pollution in India has increased significantly over the past decade. Asthma is the most common health problem faced by Indians, accounting for more than half of the health issues caused by air pollution. Air pollution is also believed to be one of the key factors accelerating the onset of Alzheimer's disease in India.
The Global Burden of Disease Study of 2017, analysed in a report by The Lancet, indicated that 76.8% of Indians are exposed to ambient particulate matter above 40 μg/m3, which is significantly above the limit recommended by national guidelines on ambient air pollution. The study estimated that, of India's 480.7 million disability-adjusted life years, 4.4% could be ascribed to ambient particulate matter pollution, and 15.8 million of them were the result of polluted air in households.
It is suggested that average life expectancy in India would increase by 1.7 years if exposure were limited to the national recommended minimum. Ambient air pollution in India is estimated to cause 670,000 deaths annually and particularly aggravates respiratory and cardiovascular conditions including chronic bronchitis, lung cancer and asthma. Ambient air pollution is linked to an increase in hospital visits, with higher concentrations of outdoor pollution particulates resulting in emergency room visit increases of between 20 and 25% for a range of conditions associated with higher exposure to air pollution. Approximately 76% of households in rural India rely on solid biomass for cooking, which contributes further to the disease burden of ambient air pollution experienced by the population of India.
State-Wide Trends
According to the WHO, India has 14 of the 15 most polluted cities in the world in terms of PM2.5 concentrations. Other Indian cities that registered very high levels of PM2.5 pollutants are Delhi, Patna, Agra, Muzaffarpur, Srinagar, Gurgaon, Jaipur, Patiala and Jodhpur, followed by Ali Subah Al-Salem in Kuwait and a few cities in China and Mongolia.
The Air Quality Index (AQI) is a number used to communicate the level of air pollution in a given city on a given day (a minimal sketch of how such an index can be computed from pollutant concentrations appears after the "Steps taken" list below). The System of Air Quality and Weather Forecasting And Research placed the AQI of Delhi under the "severe-plus" category when it touched 574. In May 2014 the World Health Organization announced New Delhi as the most polluted city in the world. In November 2016, the Great Smog of Delhi was an environmental event which saw New Delhi and adjoining areas covered in a dense blanket of smog, the worst in 17 years.
India's Central Pollution Control Board now routinely monitors four air pollutants, namely sulphur dioxide (SO2), oxides of nitrogen (NOx), suspended particulate matter (SPM) and respirable particulate matter (PM10). These are target air pollutants for regular monitoring at 308 operating stations in 115 cities/towns in 25 states and 4 Union Territories of India. The monitoring of meteorological parameters such as wind speed and direction, relative humidity and temperature has also been integrated with the monitoring of air quality. The monitoring of these pollutants is carried out for 24 hours (4-hourly sampling for gaseous pollutants and 8-hourly sampling for particulate matter) with a frequency of twice a week, to yield 104 observations in a year. The key findings of India's Central Pollution Control Board are:
Most Indian cities continue to violate India's and world air quality PM10 targets. Respirable particulate matter pollution remains a key challenge for India.
Despite the general non-attainment, some cities showed far more improvement than others. A decreasing trend has been observed in PM10 levels in cities like Solapur and Ahmedabad over the last few years. This improvement may be due to local measures taken to reduce sulphur in diesel and stringent enforcement by the government.
A decreasing trend has been observed in sulphur dioxide levels in residential areas of many cities, such as Delhi, Mumbai, Lucknow and Bhopal, during the last few years. The decreasing trend in sulphur dioxide levels may be due to recently introduced clean fuel standards, the increasing use of LPG as a domestic fuel instead of coal or fuelwood, and the use of CNG instead of diesel in certain vehicles.
A decreasing trend has been observed in nitrogen dioxide levels in residential areas of some cities, such as Bhopal and Solapur, during the last few years.
Most Indian cities greatly exceed acceptable levels of suspended particulate matter. This may be because of refuse and biomass burning, vehicles, power plant emissions and industrial sources.
The Indian air quality monitoring stations reported lower levels of PM10 and suspended particulate matter during monsoon months, possibly due to wet deposition and air scrubbing by rainfall. Higher levels of particulates were observed during winter months, possibly due to lower mixing heights and calmer conditions. In other words, India's air quality worsens in winter months and improves with the onset of the monsoon season.
The average annual SOx and NOx emission levels and periodic violations in industrial areas of India were significantly and surprisingly lower than the emissions and violations in residential areas of India.
Of the four major Indian cities, air pollution was consistently worst in Delhi, every year over the 2004–2018 period. Kolkata was a close second, followed by Mumbai. Chennai's air pollution was the least of the four.
Steps taken
The government in Delhi launched an odd-even rule in November 2017, based on the odd-even rationing method: cars with number plates ending in odd digits could only be driven on certain days of the week, while cars with even-digit plates could be driven on the remaining days. Local governments of various states also implemented measures such as tighter vehicle emission norms, higher penalties for burning rubbish and better control of road dust. The Indian government has committed to a 50% reduction in households using solid fuel for cooking. Some goals set for the future are:
Clean up the transportation sector by adding 1,000 electric public transport buses to the existing 550 buses.
Upgrade all fossil fuel combustion engine vehicles to BS6 emission standards.
Meet a goal of 25% of private vehicles being electrically powered by 2023.
Use renewable energy in all power plants.
Provide farmers with a machine called a Happy Seeder, which converts agricultural residue to fertilizer.
Encourage farmers to diversify and grow sustainable, water-conserving crops such as coarse grains and pulses.
Analyze health data and study the efficiency of different room filtration systems in areas where indoor air pollution is highest.
Identify effective ways to inform the public about air pollution data.
Launch new citizen science programs to better document exposures.
Reduce carbon emissions: "According to the Intergovernmental Panel on Climate Change, to limit warming well below 2 degrees Celsius, CO2 emissions should decline by about 20 per cent by 2030 and reach net zero around 2075; to limit warming below 1.5 degrees Celsius, CO2 emissions should decline by 50 per cent by 2030 and reach net zero by around 2050..."
Improve air quality monitoring by deploying more stations and utilizing IoT-based mobile and drive-by sensing approaches.
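To make the AQI described under State-Wide Trends concrete: national indices, including India's, typically map each pollutant onto a sub-index by piecewise-linear interpolation over a breakpoint table and report the worst sub-index as the AQI. The Python sketch below illustrates the general method; the PM2.5 breakpoints are illustrative assumptions in the style of the Indian national AQI bands, not an authoritative reproduction of the Central Pollution Control Board's tables.

    # Illustrative AQI sub-index calculation (assumed breakpoints, not official CPCB values).
    # Each tuple: (conc_low, conc_high, index_low, index_high) for PM2.5 in ug/m3.
    PM25_BREAKPOINTS = [
        (0, 30, 0, 50), (30, 60, 51, 100), (60, 90, 101, 200),
        (90, 120, 201, 300), (120, 250, 301, 400), (250, 500, 401, 500),
    ]

    def sub_index(conc, breakpoints):
        """Linearly interpolate a pollutant concentration onto the index scale."""
        for c_lo, c_hi, i_lo, i_hi in breakpoints:
            if c_lo <= conc <= c_hi:
                return i_lo + (i_hi - i_lo) * (conc - c_lo) / (c_hi - c_lo)
        return breakpoints[-1][3]  # cap at the top of the scale

    # The overall AQI is the maximum sub-index across all monitored pollutants.
    print(round(sub_index(40, PM25_BREAKPOINTS)))   # a moderate PM2.5 reading -> 67
    print(round(sub_index(300, PM25_BREAKPOINTS)))  # a severe PM2.5 reading -> 421

Computed this way, a city's AQI is driven by its single worst pollutant on a given day, which is why PM2.5 dominates the reported figures for most north Indian cities in winter.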
See also
Air pollution in Delhi
List of Kerala cities by ambient air quality
Hydrogen internal combustion engine auto rickshaw
Air pollution measurement
BioDME: low-pollution fuel for diesel generators
Steam reforming of natural gas with methane pyrolysis: CO2-neutral hydrogen production from natural gas
Petroleum coke
List of most-polluted cities
List of least-polluted cities
Criteria air pollutants
References
Further reading
Sengupta, Ramprasad; Mandal. "Air Pollution: Cost Benefit Analysis of Fuel Quality Upgradation for Indian Cities" (PDF). http://www.nipfp.org.in/media/medialibrary/2013/04/wp05_nipfp_039.pdf. Retrieved 22 July 2014.
Cropper, Maureen (June 2012). "The Health Effects of Coal Electricity Generation in India" (PDF). Retrieved 22 July 2014.
data center
A data center (American English) or data centre (Commonwealth English) is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems. Since IT operations are crucial for business continuity, a data center generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town. Estimated global data center electricity consumption in 2022 was 240–340 TWh, or roughly 1–1.3% of global electricity demand. This excludes energy used for cryptocurrency mining, which was estimated to be around 110 TWh in 2022, or another 0.4% of global electricity demand.
Data centers can vary widely in terms of size, power requirements, redundancy, and overall structure. Four common categories used to segment types of data centers are onsite data centers, colocation facilities, hyperscale data centers, and edge data centers.
History
Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.
During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The availability of inexpensive networking equipment, coupled with new standards for network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term data center, as applied to specially designed computer rooms, started to gain popular recognition about this time.
The boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called internet data centers (IDCs), which provided enhanced capabilities, such as crossover backup: "If a Bell Atlantic line is cut, we can transfer them to ... to minimize the time of outage." The term cloud data centers (CDCs) has also been used. Data centers are typically expensive to build and maintain. Increasingly, the distinction between these terms has almost disappeared and they are being integrated into the term data center.
Requirements for modern data centers
Modernization and data center transformation enhance performance and energy efficiency. Information security is also a concern, and for this reason a data center has to offer a secure environment that minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment.
Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize. The focus on modernization is not new: obsolete equipment was already being decried in 2007, and in 2011 Uptime Institute was concerned about the age of the equipment therein. By 2018 concern had shifted once again, this time to the age of the staff: "data center staff are aging faster than the equipment."
Meeting standards for data centers
The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms, including single-tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.
Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or information technology (IT) equipment. The equipment may be used to:
Operate and manage a carrier's telecommunication network
Provide data center based applications directly to the carrier's customers
Provide hosted applications for a third party to provide services to their customers
Provide a combination of these and similar data center applications
Data center transformation
Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from the traditional method of data center upgrades, which takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.
Standardization/consolidation: Reducing the number of data centers and avoiding server sprawl (both physical and virtual) often includes replacing aging data center equipment, and is aided by standardization.
Virtualization: Lowers capital and operational expenses and reduces energy consumption. Virtualized desktops can be hosted in data centers and rented out on a subscription basis. Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.
Automating: Automating tasks such as provisioning, configuration, patching, release management, and compliance is needed, not least when facing a shortage of skilled IT workers.
Securing: Protection of virtual systems is integrated with the existing security of physical infrastructures.
Raised floor
A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson. Although the first raised floor computer room was made by IBM in 1956, and raised floors have "been around since the 1960s", it was in the 1970s that it became common for computer centers to use them to allow cool air to circulate more efficiently. The first purpose of the raised floor was to allow access for wiring.
Lights out
The lights-out data center, also known as a darkened or dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because staff do not need to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.
Noise levels
Generally speaking, local authorities prefer noise levels at data centers to be "10 dB below the existing night-time background noise level at the nearest residence." OSHA regulations require monitoring of noise levels inside data centers if noise exceeds 85 decibels. The average noise level in server areas of a data center may reach as high as 92–96 dB(A). Residents living near data centers have described the sound as "a high-pitched whirring noise 24/7", saying "It's like being on a tarmac with an airplane engine running constantly ... Except that the airplane keeps idling and never leaves." External sources of noise include HVAC equipment and energy generators.
Data center design
The field of data center design has been growing for decades in various directions, including new construction big and small along with the creative re-use of existing facilities, like abandoned retail space, old salt mines and war-era bunkers. A 65-story data center has already been proposed, and the number of data centers as of 2016 had grown beyond 3 million USA-wide, with more than triple that number worldwide. Local building codes may govern the minimum ceiling heights and other parameters. Some of the considerations in the design of data centers are:
Size – one room of a building, one or more floors, or an entire building
Capacity – can hold up to or past 1,000 servers
Other considerations – space, power, cooling, and costs in the data center
Mechanical engineering infrastructure – heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization
Electrical engineering infrastructure design – utility service planning; distribution, switching and bypass from power sources; uninterruptible power source (UPS) systems; and more
Design criteria and trade-offs
Availability expectations: The costs of avoiding downtime should not exceed the cost of the downtime itself.
Site selection: Location factors include proximity to power grids, telecommunications infrastructure, networking services, transportation lines and emergency services. Other considerations should include flight paths, neighboring power drains, geological risks, and climate (associated with cooling costs). Often, power availability is the hardest to change.
High availability
Various metrics exist for measuring the data availability that results from data-center availability beyond 95% uptime, with the top of the scale counting how many nines can be placed after 99% (a short sketch converting such figures into allowed downtime follows the aisle-containment discussion below).
Modularity and flexibility
Modularity and flexibility are key elements in allowing a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed. A modular data center may consist of data center equipment contained within shipping containers or similar portable containers. Components of the data center can be prefabricated and standardized, which facilitates moving them if needed.
Environmental control
Temperature and humidity are controlled via air conditioning and indirect cooling, such as using outside air, Indirect Evaporative Cooling (IDEC) units, and sea water. It is important that computers do not get humid or overheat: high humidity can lead to dust clogging the fans, which leads to overheating, or can cause components to malfunction, ruining the board and creating a fire hazard. Overheating can cause components, usually the silicon or copper of the wires or circuits, to melt, causing connections to loosen and creating fire hazards.
Electrical power
Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel or gas turbine generators. To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically given redundant copies, and critical servers are connected to both the A-side and B-side power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
Low-voltage cable routing
Options include:
Data cabling routed through overhead cable trays
Raised-floor cabling, both for security reasons and to avoid the extra cost of cooling systems over the racks
Anti-static tiles as a flooring surface, used by smaller/less expensive data centers
Air flow
Air flow management addresses the need to improve data center computer cooling efficiency by preventing the recirculation of hot air exhausted from IT equipment and reducing bypass airflow. There are several methods of separating hot and cold airstreams, such as hot/cold aisle containment and in-row cooling units.
Aisle containment
Cold aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers. This is similar to how large-scale food companies refrigerate and store their products. Computer cabinets/server farms are often organized for containment of hot/cold aisles. Proper air duct placement prevents the cold and hot air from mixing. Rows of cabinets are paired to face each other so that the cool and hot air intakes and exhausts don't mix, which would severely reduce cooling efficiency. Alternatively, a range of underfloor panels can create efficient cold air pathways directed to the raised-floor vented tiles. Either the cold aisle or the hot aisle can be contained. Another option is fitting cabinets with vertical exhaust duct chimneys. Hot exhaust pipes/vents/ducts can direct the air into a plenum space above a dropped ceiling and back to the cooling units or to outside vents. With this configuration, the traditional hot/cold aisle configuration is not a requirement.
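To make the "nines" mentioned under High availability concrete, the following minimal Python sketch (an illustration only, not an industry tool) converts an availability percentage into the corresponding allowed downtime per year:

    # Convert an availability percentage into allowed downtime per year.
    def downtime_per_year(availability_pct):
        minutes_per_year = 365.25 * 24 * 60
        down_minutes = minutes_per_year * (1 - availability_pct / 100.0)
        return divmod(down_minutes, 60)  # (hours, remaining minutes)

    for pct in (95.0, 99.0, 99.9, 99.99, 99.999):
        hours, minutes = downtime_per_year(pct)
        print(f"{pct}% uptime -> about {int(hours)} h {minutes:.0f} min of downtime per year")

Each additional nine cuts the allowed downtime by roughly a factor of ten, from about 88 hours per year at 99% to about 5 minutes per year at 99.999%.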
Fire protection
Data centers feature fire protection systems, including passive and active design elements, as well as fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage. Although the main room usually does not allow wet-pipe-based systems due to the fragile nature of circuit boards, there are still systems that can be used in the rest of the facility or in cold/hot aisle air circulation systems that are closed systems, such as:
Sprinkler systems
Misting, using high pressure to create extremely small water droplets, which can be used in sensitive rooms due to the nature of the droplets
Other means of putting out fires also exist, especially in sensitive areas, usually gaseous fire suppression, of which halon gas was the most popular until the negative effects of producing and using it were discovered.
Security
Physical access is usually restricted. Layered security often starts with fencing, bollards and mantraps. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information. Fingerprint recognition mantraps are starting to be commonplace. Logging access is required by some data protection regulations; some organizations tightly link this to access control systems. Multiple log entries can occur at the main entrance, entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent power distribution units, so that locks are networked through the same appliance.
Energy use
Energy use is a central issue for data centers. Power draw ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building. For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center.
Greenhouse gas emissions
In 2020, data centers (excluding cryptocurrency mining) and data transmission each used about 1% of world electricity. Although some of this electricity was low carbon, the IEA called for more "government and industry efforts on energy efficiency, renewables procurement and RD&D", as some data centers still use electricity generated by fossil fuels. They also said that lifecycle emissions should be considered, that is, including embodied emissions, such as in buildings. Data centers are estimated to have been responsible for 0.5% of US greenhouse gas emissions in 2018. Some Chinese companies, such as Tencent, have pledged to be carbon neutral by 2030, while others such as Alibaba have been criticized by Greenpeace for not committing to become carbon neutral.
Energy efficiency and overhead
The most commonly used energy efficiency metric for data centers is power usage effectiveness (PUE), calculated as the ratio of total power entering the data center divided by the power used by IT equipment:

\mathrm{PUE} = \frac{\text{Total Facility Power}}{\text{IT Equipment Power}}

PUE measures the share of power consumed by overhead devices (cooling, lighting, etc.) on top of the power delivered to IT equipment. The average USA data center has a PUE of 2.0, meaning two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment.
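As a minimal worked example of the PUE definition above, the following Python sketch computes PUE and the annual electricity bill for a hypothetical facility; the load figures and the $0.10/kWh tariff are illustrative assumptions, not measurements:

    # Power usage effectiveness for a hypothetical facility.
    def pue(total_facility_kw, it_equipment_kw):
        return total_facility_kw / it_equipment_kw

    it_load_kw = 1000.0    # assumed IT equipment draw
    facility_kw = 2000.0   # assumed total facility draw (matches the US-average PUE of 2.0)
    tariff = 0.10          # assumed electricity price, USD per kWh

    print(f"PUE = {pue(facility_kw, it_load_kw):.2f}")
    annual_kwh = facility_kw * 24 * 365
    print(f"Annual electricity cost: ${annual_kwh * tariff:,.0f}")
    # Each 0.1 reduction in PUE at this IT load trims the overhead draw by 100 kW:
    print(f"Saving per 0.1 of PUE: ${it_load_kw * 0.1 * 24 * 365 * tariff:,.0f} per year")

At these assumed figures the facility spends about $1.75 million a year on electricity, and every 0.1 shaved off the PUE saves roughly $88,000 annually, which is why reducing overhead dominates data center efficiency work.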
State-of-the-art data centers are estimated to have a PUE of roughly 1.2. Google publishes quarterly efficiency metrics from its data centers in operation. The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile in energy efficiency of all reported facilities. The Energy Efficiency Improvement Act of 2015 (United States) requires federal facilities, including data centers, to operate more efficiently. California's Title 24 (2014) of the California Code of Regulations mandates that every newly constructed data center must have some form of airflow containment in place to optimize energy efficiency. The European Union also has a similar initiative: the EU Code of Conduct for Data Centres.
Energy use analysis and projects
The focus of measuring and analyzing energy use goes beyond what is used by IT equipment; facility support hardware such as chillers and fans also use energy. In 2011, server racks in data centers were designed for more than 25 kW, and the typical server was estimated to waste about 30% of the electricity it consumed. The energy demand for information storage systems is also rising. A high-availability data center is estimated to have a 1 megawatt (MW) demand and consume $20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations show that in two years, the cost of powering and cooling a server could equal the cost of purchasing the server hardware (a worked sketch of this break-even appears below). Research in 2018 has shown that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization.
In 2011, Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project, Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, energy consumption due to unnecessary expansion slots on the motherboard and unneeded components, such as a graphics card, was also saved. In 2016, Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. By eliminating the multiple transformers usually deployed in data centers, Google achieved a 30% increase in energy efficiency. In 2017, sales of data center hardware built to OCP designs topped $1.2 billion and were expected to reach $6 billion by 2021.
Power and cooling analysis
Power is the largest recurring cost to the user of a data center. Cooling it at or below 70 °F (21 °C) wastes money and energy. Furthermore, overcooling equipment in environments with a high relative humidity can expose equipment to a high amount of moisture that facilitates the growth of salt deposits on conductive filaments in the circuitry. A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures.
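The two-year break-even claim above can be made concrete with a small arithmetic sketch; every input below is an illustrative assumption (hypothetical server price, draw, and tariff), not a vendor figure:

    # When does cumulative power-and-cooling spend equal the server purchase price?
    server_price = 3000.0   # assumed purchase cost, USD
    server_draw_kw = 0.8    # assumed average electrical draw of the server
    pue_factor = 2.0        # facility overhead multiplier (cooling etc.), per the US average
    tariff = 0.11           # assumed electricity price, USD per kWh

    annual_cost = server_draw_kw * pue_factor * 24 * 365 * tariff
    print(f"Annual power and cooling cost: ${annual_cost:,.0f}")
    print(f"Years until it matches the purchase price: {server_price / annual_cost:.1f}")

With these assumptions the server costs about $1,540 a year to power and cool, matching its purchase price in roughly two years; cheaper electricity or a lower PUE stretches the break-even accordingly.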
A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity. The cooling of data centers is the second largest power consumer after servers. Cooling energy varies from 10% of the total energy consumption in the most efficient data centers up to 45% in standard air-cooled data centers.
Energy efficiency analysis
An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics. However, the limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise.
Computational Fluid Dynamics (CFD) analysis
This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center, predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling. By predicting the effects of these environmental conditions, CFD analysis of a data center can be used to predict the impact of high-density racks mixed with low-density racks and the onward impact on cooling resources, poor infrastructure management practices, and AC failure or AC shutdown for scheduled maintenance.
Thermal zone mapping
Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center. This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units.
Green data centers
Data centers use a lot of power, consumed by two main usages: the power required to run the actual equipment and the power required to cool the equipment. Power efficiency reduces the first category. Cooling cost reduction through natural means includes location decisions: as long as good fiber connectivity, power grid connections, and concentrations of people to manage the equipment are not required, a data center can be miles away from the users. Mass data centers like Google's or Facebook's don't need to be near population centers. Arctic locations that can use outside air, which provides cooling, are becoming more popular. Renewable electricity sources are another plus. Thus countries with favorable conditions, such as Canada, Finland, Sweden, Norway, and Switzerland, are trying to attract cloud computing data centers.
Energy reuse
It is very difficult to reuse the heat which comes from air-cooled data centers. For this reason, data center infrastructures are more often equipped with heat pumps. An alternative to heat pumps is the adoption of liquid cooling throughout a data center. Different liquid cooling techniques are mixed and matched to allow for a fully liquid-cooled infrastructure that captures all heat with water.
Different liquid technologies are categorized into three main groups: indirect liquid cooling (water-cooled racks), direct liquid cooling (direct-to-chip cooling) and total liquid cooling (complete immersion in liquid; see server immersion cooling). This combination of technologies allows the creation of a thermal cascade, as part of temperature chaining scenarios, to create high-temperature water outputs from the data center.
Dynamic infrastructure
Dynamic infrastructure provides the ability to intelligently, automatically and securely move workloads within a data center anytime, anywhere, for migrations, provisioning, performance enhancement, or building co-location facilities. It also facilitates performing routine maintenance on either physical or virtual systems while minimizing interruption. A related concept is composable infrastructure, which allows for the dynamic reconfiguration of the available resources to suit needs, only when needed. Side benefits include reducing cost, facilitating business continuity and high availability, and enabling cloud and grid computing.
Network infrastructure
Communications in data centers today are most often based on networks running the Internet protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world, connected according to the data center network architecture. Redundancy of the Internet connection is often provided by using two or more upstream service providers (see multihoming). Some of the servers at the data center are used for running the basic Internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers. Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center.
Software/data backup
Non-mutually exclusive options for data backup are onsite and offsite. Onsite backup is traditional, and one of its major advantages is immediate availability.
Offsite backup storage
Data backup techniques include having an encrypted copy of the data offsite. Methods used for transporting data are:
Having the customer write the data to a physical medium, such as magnetic tape, and then transporting the tape elsewhere
Directly transferring the data to another site during the backup, using appropriate links
Uploading the data "into the cloud"
Modular data center
For quick deployment or disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in a very short amount of time.
Micro data center
Micro data centers (MDCs) are access-level data centers which are smaller in size than traditional data centers but provide the same features. They are typically located near the data source to reduce communication delays, as their small size allows several MDCs to be spread out over a wide area. MDCs are well suited to user-facing, front-end applications. They are commonly used in edge computing and other areas where low-latency data processing is needed.
See also
Notes
References
External links
Lawrence Berkeley Lab – Research, development, demonstration, and deployment of energy-efficient technologies and practices for data centers
DC Power For Data Centers Of The Future – FAQ: 380VDC testing and demonstration at a Sun data center
White Paper – Property Taxes: The New Challenge for Data Centers
The European Commission H2020 EURECA Data Centre Project – Data centre energy efficiency guidelines, extensive online training material, case studies/lectures (under events page), and tools
boeing
The Boeing Company is an American multinational corporation that designs, manufactures, and sells airplanes, rotorcraft, rockets, satellites, telecommunications equipment, and missiles worldwide. The company also provides leasing and product support services. Boeing is among the largest global aerospace manufacturers; it is the third-largest defense contractor in the world based on 2020 revenue and is the largest exporter in the United States by dollar value. Boeing's stock is a component of the Dow Jones Industrial Average.
Boeing was founded by William Boeing in Seattle, Washington, on July 15, 1916. The present corporation is the result of the merger of Boeing with McDonnell Douglas on August 1, 1997. The then-chairman and CEO of Boeing, Philip M. Condit, assumed those roles in the combined company, while Harry Stonecipher, former CEO of McDonnell Douglas, became president and COO. As of 2023, the Boeing Company's corporate headquarters is located in the Crystal City neighborhood of Arlington, Virginia. The company is organized into four primary divisions: Boeing Commercial Airplanes (BCA); Boeing Defense, Space & Security (BDS); Boeing Global Services; and Boeing Capital. In 2021, Boeing recorded $62.3 billion in sales. Boeing is ranked 54th on the Fortune magazine "Fortune 500" list (2020), and ranked 121st on the "Fortune Global 500" list (2020).
History
The Boeing Company was started in 1916, when American lumber industrialist William E. Boeing founded Pacific Aero Products Company in Seattle, Washington. Shortly before doing so, he and Conrad Westervelt created the "B&W" seaplane. In 1917, the organization was renamed Boeing Airplane Company, with William Boeing forming Boeing Airplane & Transport Corporation in 1928. In 1929, the company was renamed United Aircraft and Transport Corporation, followed by the acquisition of several aircraft makers such as Avion, Chance Vought, Sikorsky Aviation, Stearman Aircraft, Pratt & Whitney, and Hamilton Metalplane.
In 1931, the group merged its four smaller airlines into United Airlines. In 1934, aircraft manufacturing was required by law to be separate from air transportation. Boeing Airplane Company therefore became one of three major groups to arise from the dissolution of United Aircraft and Transport; the other two entities were United Aircraft (later United Technologies) and United Airlines.
In 1960, the company bought Vertol Aircraft Corporation, at the time the biggest independent manufacturer of helicopters. During the 1960s and 1970s, the company diversified into industries such as outer space travel, marine craft, agriculture, energy production and transit systems.
In 1995, Boeing partnered with Russian, Ukrainian, and Anglo-Norwegian organizations to create Sea Launch, a company providing commercial launch services sending satellites to geostationary orbit from floating platforms. In 2000, Boeing acquired the satellite segment of Hughes Electronics.
In December 1996, Boeing announced its intention to merge with McDonnell Douglas, which, following regulatory approval, was completed on August 1, 1997. The delay was caused by objections from the European Commission, which ultimately placed three conditions on the merger: exclusivity agreements with three US airlines would be terminated, separate accounts would be maintained for the McDonnell Douglas civil aircraft business, and some defense patents were to be made available to competitors.
In 2020, Quartz reported that after the merger there was a "clash of corporate cultures, where Boeing's engineers and McDonnell Douglas's bean-counters went head-to-head", which the latter won, and that this may have contributed to the events leading up to the 737 MAX crash crisis. Boeing's corporate headquarters moved from Seattle to Chicago in 2001. In 2018, the company opened its first factory in Europe at Sheffield, UK, reinforced by a research partnership with the University of Sheffield.
In May 2020, the company cut over 12,000 jobs due to the drop in air travel during the COVID-19 pandemic, with plans for a total 10% cut of its workforce, or approximately 16,000 positions. In July 2020, Boeing reported a loss of $2.4 billion as a result of the pandemic and the grounding of its 737 MAX aircraft, and said it was planning further job and production cuts in response. On August 18, 2020, CEO Dave Calhoun announced further job cuts; on October 28, 2020, nearly 30,000 employees were laid off, as the airplane manufacturer was increasingly losing money due to the COVID-19 pandemic.
The Boeing 777X, the largest-capacity twinjet, made its maiden flight on January 25, 2020. Following an incident during flight testing, the estimated first delivery of the aircraft was delayed until 2024.
After two fatal crashes of the Boeing 737 MAX narrow-body passenger airplanes in 2018 and 2019, aviation regulators and airlines around the world grounded all 737 MAX airliners; a total of 387 aircraft were grounded. Boeing's reputation, business, and financial rating suffered after these groundings, raising questions about Boeing's strategy, governance, and focus on profits and cost efficiency. In September 2022, Boeing was ordered to pay $200 million over charges of misleading investors about safety issues related to these crashes. In March 2023, Boeing disputed in court filings that the victims of Ethiopian Airlines Flight 302 experienced any pain and suffering leading up to the crash.
In May 2022, Boeing announced plans to move its global headquarters from Chicago to Arlington, Virginia, a suburb of Washington, D.C. The company said that this decision was made in part due to the region's "proximity to our customers and stakeholders, and its access to world-class engineering and technical talent." In February 2023, Boeing announced plans to lay off approximately 2,000 of its workers in finance and human resources. In May 2023, Boeing acquired autonomous eVTOL air taxi startup Wisk Aero.
Divisions
The corporation's four main divisions are Boeing Commercial Airplanes (BCA); Boeing Defense, Space & Security (BDS); Boeing Global Services; and Boeing Capital.
Boeing Commercial Airplanes (BCA) builds commercial aircraft including the 737, 747, 767, 777, and 787, along with freighter and business jet variants of most. The division employs nearly 35,000 people, many working at the company's manufacturing facilities in Everett and Renton, Washington (outside of Seattle), and in South Carolina.
Boeing Defense, Space & Security (BDS) builds military aircraft, satellites, spacecraft, and space launch vehicles.
Boeing Global Services provides aftermarket support, such as maintenance and upgrades, to customers who purchase equipment from BCA, BDS, or other manufacturers.
Boeing Capital provides customers financing for the products and services from the company's other divisions.
Environmental record
In 2006, the UCLA Center for Environmental Risk Reduction released a study showing that Boeing's Santa Susana Field Laboratory, a former Rocketdyne test and development site in the Simi Hills of eastern Ventura County in Southern California, had been contaminated by Rocketdyne with toxic and radioactive waste. Boeing agreed to a cleanup agreement with the EPA in 2017. Clean-up studies and lawsuits are in progress. On July 19, 2022, Boeing announced a renewed partnership with Mitsubishi to innovate carbon-neutral and sustainable solutions.
Jet biofuels
The airline industry is responsible for about 11% of greenhouse gases emitted by the U.S. transportation sector. Aviation's share of greenhouse gas emissions was poised to grow as air travel increased and ground vehicles used more alternative fuels like ethanol and biodiesel. Boeing estimates that biofuels could reduce flight-related greenhouse gas emissions by 60 to 80%. One proposed solution blends algae fuels with existing jet fuel.
Boeing executives said the company was collaborating with Brazilian biofuels maker Tecbio, Aquaflow Bionomic of New Zealand, and other fuel developers around the world. As of 2007, Boeing had tested six fuels from these companies, and expected to test 20 fuels "by the time we're done evaluating them". Boeing also joined other aviation-related members in the Algal Biomass Organization (ABO) in June 2008.
Air New Zealand and Boeing are researching the jatropha plant to see if it is a sustainable alternative to conventional fuel. A two-hour test flight using a 50–50 mixture of the new biofuel with Jet A-1 in a Rolls-Royce RB211 engine of a 747-400 was completed on December 30, 2008. The engine was then removed and studied to identify any differences between the jatropha blend and regular Jet A-1. No effects on performance were found.
On August 31, 2010, Boeing worked with the U.S. Air Force to test the Boeing C-17 running on 50% JP-8, 25% hydro-treated renewable jet fuel, and 25% Fischer–Tropsch fuel, with successful results.
Electric propulsion
For NASA's N+3 future airliner program, Boeing has determined that hybrid electric engine technology is by far the best choice for its subsonic design. Hybrid electric propulsion has the potential to shorten takeoff distance and reduce noise. Boeing created a team to study electric propulsion in future generations of subsonic commercial aircraft. The team, SUGAR (Subsonic Ultra Green Aircraft Research), includes BR&T, Boeing Commercial Airplanes, General Electric, and Georgia Tech. The team is reviewing five main concepts. SUGAR-Free and Refined SUGAR are two concepts based on conventional aircraft similar to the 737. SUGAR High and SUGAR Volt are both high-span, strut-braced wing concepts. The final concept is SUGAR Ray, a wing-body hybrid. The SUGAR Volt concept, which adds an electric battery gas turbine hybrid propulsion system, has resulted in a drop in fuel burn of more than 70 percent and a reduction of total energy use by 55 percent.
Political contributions, federal contracts, advocacy
In 2008 and 2009, Boeing was second on the list of Top 100 US Federal Contractors, with contracts totaling US$22 billion and US$23 billion respectively.
Between 1995 and early 2021, the company agreed to pay US$4.3 billion to settle 84 instances of misconduct, including US$615 million in 2006 in relation to illegal hiring of government officials and improper use of proprietary information. Boeing secured the highest-ever tax breaks at the state level in 2013. Boeing spent US$16.9 million on lobbying expenditures in 2009. In the 2008 presidential election, Barack Obama "was by far the biggest recipient of campaign contributions from Boeing employees and executives, hauling in US$197,000 – five times as much as John McCain, and more than the top eight Republicans combined".
Boeing has a corporate citizenship program centered on charitable contributions in five areas: education, health and human services, environment, arts and culture, and civic engagement. In 2011, Boeing spent US$147.3 million in these areas through charitable grants and business sponsorships. In February 2012, Boeing Global Corporate Citizenship partnered with the Insight Labs to develop a new model for foundations to more effectively lead the sectors they serve.
The company is a member of the U.S. Global Leadership Coalition, a Washington D.C.-based coalition of more than 400 major companies and NGOs that advocate a larger International Affairs Budget, which funds American diplomatic and development efforts abroad. A series of U.S. diplomatic cables show how U.S. diplomats and senior politicians intervene on behalf of Boeing to help boost the company's sales.
In 2007 and 2008, the company benefited from over US$10 billion of long-term loan guarantees from the Export-Import Bank of the United States, helping finance the purchase of its commercial aircraft in countries including Brazil, Canada, Ireland, and the United Arab Emirates; this amounted to some 65% of the total loan guarantees the bank made in the period.
Criticism
In December 2011, the non-partisan organization Public Campaign criticized Boeing for spending US$52.29 million on lobbying and not paying taxes during 2008–2010, instead getting US$178 million in tax rebates, despite making a profit of US$9.7 billion, while laying off 14,862 workers since 2008 and increasing executive pay by 31% to US$41.9 million in 2010 for its top five executives.
The firm has also been criticized for supplying and profiting from wars, including the war in Yemen, where its missiles were found to have been used in indiscriminate attacks that killed many civilians.
Boeing has been accused of unethical practices (in violation of the Procurement Integrity Act) while attempting to submit a revised bid to NASA for its lunar landing project.
Financials
For the fiscal year 2017, Boeing reported earnings of US$8.191 billion, with annual revenue of US$93.392 billion, a 1.25% decline from the previous fiscal cycle. Boeing's shares traded at over $209 per share, and its market capitalization was valued at over US$206.6 billion.
Between 2010 and 2018, Boeing increased its operating cash flow from $3 billion to $15.3 billion, sustaining its share price, by negotiating advance payments from customers and delaying payments to its suppliers. This strategy is sustainable only as long as orders are good and delivery rates are increasing. From 2013 to 2019, Boeing spent over $60 billion on dividends and stock buybacks, twice as much as the development costs of the 787.
In 2020, Boeing's second-quarter revenue was $11.8 billion as a result of the pandemic slump.
Due to higher sales in other divisions and an influx of deliveries of commercial jetliners in 2021, second-quarter revenue increased by 44%, reaching nearly $17 billion.
Employment numbers
Approximately 1.5% of Boeing employees are in the Technical Fellowship program, a program through which Boeing's top engineers and scientists set technical direction for the company. The average salary at Boeing is $76,784, as reported by former employees.
Corporate governance
In 2022, Rory Kennedy made a documentary film, Downfall: The Case Against Boeing, streamed by Netflix. She said of the 21st-century history of Boeing: "There were many decades when Boeing did extraordinary things by focusing on excellence and safety and ingenuity. Those three virtues were seen as the key to profit. It could work, and beautifully. And then they were taken over by a group that decided Wall Street was the end-all, be-all."
On May 5, 2022, Boeing announced that it would be moving its headquarters from Chicago to Arlington, Virginia, in the Washington, D.C. metropolitan area. Additionally, it plans to add a research and technology center in Northern Virginia.
Board
As of 2022, Boeing is headed by a president who also serves as the chief executive officer. The roles of chairman of the board and CEO were separated in October 2019.
Past leadership
See also
Boeing Everett Factory – main production facility for commercial widebody aircraft
Competition between Airbus and Boeing
Future of Flight Aviation Center & Boeing Tour – corporate public museum
United Aircraft Corporation
United States Air Force Plant 42
References
Further reading
Cloud, Dana L. We Are the Union: Democratic Unionism and Dissent at Boeing. Urbana, IL: University of Illinois Press, 2011. OCLC 816419078
Greider, William. One World, Ready or Not: The Manic Logic of Global Capitalism. London: Penguin Press, 1998. OCLC 470412225
Reed, Polly. Capitalist Family Values: Gender, Work, and Corporate Culture at Boeing. Lincoln, NE: University of Nebraska Press, 2015. OCLC 931949091
Sell, Terry M. Wings of Power: Boeing and the Politics of Growth in the Northwest. Seattle, WA: University of Washington Press, 2015. ISBN 9780295996257
External links
Official website
Business data for Boeing Co: "Annual Reports Collection". University of Washington. 1948–1984.
united states vehicle emission standards
United States vehicle emission standards are set through a combination of legislative mandates enacted by Congress through Clean Air Act (CAA) amendments from 1970 onwards, and executive regulations managed nationally by the Environmental Protection Agency (EPA) and, more recently, jointly with the National Highway Traffic Safety Administration (NHTSA). These standards cover common motor vehicle air pollutants, including carbon monoxide, nitrogen oxides, and particulate matter, and newer versions have incorporated fuel economy standards. In nearly all cases, these agencies set standards that are expected to be met on a fleet-wide basis by automobile and other vehicle manufacturers, with states delegated to enforce those standards but not allowed to set stricter requirements. California has generally been the exception: it was granted a waiver and allowed to set stricter standards because it had established its own via the California Air Resources Board prior to the 1970 CAA amendments. Several other states have since also received waivers to follow California's standards, which have also become a de facto standard for vehicle manufacturers to follow. Vehicle emission standards have generally been points of contention between the government, vehicle manufacturers, and environmental groups, and have become a subject of political debate. Legislative and regulation history Clean Air Act of 1963 (CAA) The Clean Air Act of 1963 (CAA) was passed as an extension of the Air Pollution Control Act of 1955, directing the federal government, via the United States Public Health Service under the then-Department of Health, Education, and Welfare (HEW), to support research and development towards reducing pollution and to work with states to establish their own emission reduction programs. The CAA was amended in 1965 with the Motor Vehicle Air Pollution Control Act (MVAPCA), which gave the HEW Secretary authority to set federal standards for vehicle emissions as early as 1967. California Air Resources Board (CARB) In the mid-20th century, California's economy grew rapidly after the Great Depression, but this economic development was accompanied by an increase in air pollution in the state. As a result, smog started to form in the valleys of Southern California, causing respiratory problems for humans and damaging crops. In the 1950s, Dutch chemist Arie Jan Haagen-Smit identified the air pollutants responsible for the smog: carbon monoxide, hydrocarbons, and nitrogen oxides emitted from cars and factories through inefficient fuel combustion. Haagen-Smit also discovered that these air pollutants react with sunlight to form ozone, a major component of smog. In response, California established the California Air Resources Board (CARB) in 1967, with Haagen-Smit as its first chairman; among other activities, CARB set stringent vehicle emission standards that year to reduce air pollution in the state. Other states were facing similar air pollution issues at the same time, but fearing that setting too strict a standard would drive away automobile manufacturers, they considered implementing standards that were less restrictive than California's, potentially creating a patchwork of regulations across the United States.
The automobile industry lobbied Congress, and the CAA was modified in 1967 with the National Emissions Standards Act (also known as the Air Quality Act), which expressly prevented states from setting more restrictive emission standards than the federal levels. However, because California had already established its program, it was granted a waiver and allowed to keep its standards. This Act did give states the authority to operate vehicle inspection programs beyond the requirements for new vehicles, though few states took their own action on this. Formation of the Environmental Protection Agency and the Clean Air Act Amendment of 1970 Air pollution had become a major national focal point by 1970, leading to a major amendment to the CAA. Near the end of 1970, the United States Environmental Protection Agency (EPA) was formed out of an executive order under President Richard Nixon, with ratification by Congress, to consolidate all of the environment-related executive-branch programs into a single entity; the new agency was designated as the primary agency for administering the CAA going forward. Among the provisions related to vehicle emissions: The 1970 Amendment required the EPA to define National Ambient Air Quality Standards (NAAQS) and to update them over time. The NAAQS, at passage of the 1970 amendment, covered the pollutants carbon monoxide, nitrogen dioxide, sulfur dioxide, particulate matter, hydrocarbons and photochemical oxidants (such as ozone). Other pollutants, such as lead, were added later based on EPA review of current conditions. Along with the other activities set by the EPA, the states were to create State Implementation Plans (SIPs) to bring their air quality within the NAAQS by 1975. The Amendment also required the EPA to define emission standards for new vehicles to help with the NAAQS emission reduction goals, including standards for fuel and testing of new vehicles to make sure these standards are met. Additionally, the 1970 CAA Amendment continued California's waiver program, through which California can seek exemptions from the EPA's emissions requirements as long as its own are at least as strict as the EPA's vehicle standards. Clean Air Act Amendment of 1977 The EPA's assessment of the country's progress toward meeting the target NAAQS goals by 1975 was poor; numerous nonattainment areas had been identified across the country. With the 1977 Amendment to the CAA, a new deadline of December 31, 1982, for meeting the NAAQS was fixed, with no allowance for extending the deadline unless specific control measures were established. Among other key provisions was the establishment of vehicle inspection and maintenance (I/M) programs, required in nonattainment states and optional in other areas. This required that states establish emission testing facilities for in-use vehicles to make sure they met emissions requirements and were maintained and repaired as necessary to correct any problems before their registration was renewed. The EPA was tasked with establishing the basic protocols for these facilities. Other states that had met the NAAQS attainment goals could optionally establish I/M programs for existing vehicles but were required to follow the EPA's specifications. New vehicle emission standards Due to its preexisting standards and particularly severe motor vehicle air pollution problems in the Los Angeles metropolitan area, the U.S. state of California has special dispensation from the federal government to promulgate its own automobile emissions standards.
Other states may choose to follow either the national standard or the stricter California standards. The states that have adopted the California standards are: Colorado, Connecticut, Delaware, Maine, Maryland, Massachusetts, New Jersey, New Mexico (2011 model year and later), New York, Nevada, Oregon, Pennsylvania, Rhode Island, Virginia, Vermont, and Washington (2009 model year and later), as well as the District of Columbia. Such states are frequently referred to as "CARB states" in automotive discussions because the regulations are defined by the California Air Resources Board. The EPA adopted the Californian fuel economy and greenhouse gas standard as a national standard by the 2016 model year and collaborated with Californian regulators on stricter national emissions standards for model years 2017–2025. Criteria pollutants Light-duty vehicles Light-duty vehicles are certified for compliance with emission standards by measuring their tailpipe emissions during rigorously defined driving cycles that simulate a typical driving pattern. The FTP-75 city driving test (averaging about 21 miles per hour (34 km/h)) and the HWFET highway driving test (averaging about 48 miles per hour (77 km/h)) are used for measuring both emissions and fuel economy. Two sets, or tiers, of emission standards for light-duty vehicles in the United States were defined as a result of the Clean Air Act Amendments of 1990. The Tier I standard was adopted in 1991 and was phased in from 1994 to 1997. Tier II standards were phased in from 2004 to 2009. Within the Tier II ranking, there is a subranking of bins 1–10 for light-duty vehicles, with bin 1 being the cleanest (zero-emission vehicle) and bin 10 the dirtiest. The former Tier 1 standards that were effective from 1994 until 2003 differed between automobiles and light trucks (SUVs, pickup trucks, and minivans), but Tier II standards are the same for both types. These standards specifically restrict emissions of carbon monoxide (CO), oxides of nitrogen (NOx), particulate matter (PM), formaldehyde (HCHO), and non-methane organic gases (NMOG) or non-methane hydrocarbons (NMHC). The limits are defined in grams per mile (g/mi). Phase 1: 1994–1999 These standards were phased in from 1994 to 1997 and were phased out in favor of the national Tier 2 standard from 2004 to 2009. Tier I standards cover vehicles with a gross vehicular weight rating (GVWR) below 8,500 pounds (3,856 kg) and are divided into five categories: one for passenger cars, and four for light-duty trucks (which include SUVs and minivans) divided up based on vehicle weight and cargo capacity. California's Low-emission vehicle (LEV) program defines six automotive emission standards which are stricter than the United States' national Tier regulations. Each standard has several targets depending on vehicle weight and cargo capacity; the regulations cover vehicles with test weights up to 14,000 pounds (6,400 kg). Listed in order of increasing stringency, the standards are: TLEV – Transitional low-emission vehicle LEV – Low-emission vehicle ULEV – Ultra-low-emission vehicle SULEV – Super-ultra-low-emission vehicle ZEV – Zero-emission vehicle. The last category is largely restricted to electric vehicles and hydrogen cars, although such vehicles are usually not entirely non-polluting. In those cases, the other emissions are transferred to another site, such as a power plant or hydrogen reforming center, unless such sites run on renewable energy.
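To make the per-pollutant limit mechanics concrete, here is a minimal Python sketch of how a measured test result (in g/mi) might be checked against bin limits, the way a certification harness could assign the cleanest qualifying bin. The bin names and numeric limits below are illustrative placeholders for exposition, not the actual regulatory values.

# Minimal sketch of assigning the cleanest illustrative "bin" that a set of
# measured g/mi results satisfies.  Limits are placeholders, NOT the actual
# Tier 2 regulatory values.
ILLUSTRATIVE_LIMITS_G_PER_MI = {
    "bin_5": {"NOx": 0.07, "NMOG": 0.090, "CO": 4.2, "PM": 0.01, "HCHO": 0.018},
    "bin_8": {"NOx": 0.20, "NMOG": 0.125, "CO": 4.2, "PM": 0.02, "HCHO": 0.018},
}

def cleanest_bin(measured: dict) -> str | None:
    """Return the most stringent illustrative bin the measurements satisfy."""
    for name, limits in sorted(ILLUSTRATIVE_LIMITS_G_PER_MI.items()):
        if all(measured.get(p, 0.0) <= lim for p, lim in limits.items()):
            return name
    return None  # dirtier than every bin listed

print(cleanest_bin({"NOx": 0.05, "NMOG": 0.08, "CO": 2.0, "PM": 0.005, "HCHO": 0.010}))
# -> bin_5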
Transitional NLEV: 1999–2003 A set of transitional and initially voluntary "national low emission vehicle" (NLEV) standards was in effect starting in 1999 for northeastern states and in 2001 for the rest of the country, until Tier II, adopted in 1999, began to be phased in from 2004 onwards. The National Low Emission Vehicle program covered vehicles below 6,000 pounds (2,700 kg) GVWR and adapted the national standards to accommodate California's stricter regulations. Phase 2: 2004–2009 Instead of basing emissions on vehicle weight, Tier II standards are divided into several numbered "bins". Eleven bins were initially defined, with bin 1 being the cleanest (zero-emission vehicle) and 11 the dirtiest. However, bins 9, 10, and 11 were temporary. Only the first ten bins were used for light-duty vehicles below 8,500 pounds (3,900 kg) GVWR, while medium-duty passenger vehicles up to 10,000 pounds (4,500 kg) GVWR had access to all 11 bins. Manufacturers can make vehicles which fit into any of the available bins, but must still meet average targets for their entire fleets. The two least-restrictive bins for passenger cars, 9 and 10, were phased out at the end of 2006. However, bins 9 and 10 were available for classifying a restricted number of light-duty trucks until the end of 2008, when they were removed along with bin 11 for medium-duty vehicles. As of 2009, light-duty trucks must meet the same emissions standards as passenger cars. Tier II regulations also defined restrictions on the amount of sulfur allowed in gasoline and diesel fuel, since sulfur can interfere with the operation of advanced exhaust treatment systems such as selective catalytic reduction (SCR) systems and diesel particulate filters. Sulfur content in gasoline was limited to an average of 120 parts per million (maximum 300 ppm) in 2004, and this was reduced to an average of 30 ppm (maximum 80 ppm) for 2006. Ultra-low sulfur diesel began to be restricted to a maximum of 15 ppm in 2006, and refiners were required to be 100% compliant with that level by 2010. A second round of California standards, known as Low Emission Vehicle II, was timed to coordinate with the Tier 2 rollout. The PZEV and AT-PZEV ratings are for vehicles which achieve a SULEV II rating and also have systems to eliminate evaporative emissions from the fuel system and 150,000-mile/15-year warranties on emission-control components. Several ordinary gasoline vehicles from the 2001 and later model years qualify as PZEVs. If a PZEV has technology that can also be used in ZEVs, such as an electric motor or high-pressure gaseous fuel tanks for compressed natural gas (CNG) or liquefied petroleum gas (LPG), it qualifies as an AT-PZEV. Diesel particulate filters became a requirement in 2014; gasoline vehicles were exempt. Tier III New tailpipe and evaporative emission standards began phasing in with the 2017 model year, along with new fuel standards. Heavy-duty vehicles Heavy-duty vehicles must comply with more stringent exhaust emission standards and require ultra-low sulfur diesel (ULSD) fuel (15 ppm maximum) beginning with the 2007 model year. Greenhouse gases Federal emissions regulations historically did not directly limit the primary component of vehicle exhaust, carbon dioxide (CO2). Since CO2 emissions are proportional to the amount of fuel used, the national Corporate Average Fuel Economy regulations were historically the primary way in which automotive CO2 emissions were regulated in the U.S. The EPA faced a lawsuit seeking to compel it to regulate greenhouse gases as a pollutant, Massachusetts v.
Environmental Protection Agency. In 2007, the California Air Resources Board passed strict greenhouse gas emission standards, which were challenged in the courts. On September 12, 2007, a judge in Vermont ruled in favor of allowing states to conditionally regulate greenhouse gas (GHG) emissions from new cars and trucks, defeating an attempt by automakers to block state emissions standards. A group of automakers including General Motors, DaimlerChrysler, and the Alliance of Automobile Manufacturers had sued the state of Vermont to block rules calling for a 30 percent reduction in GHG emissions by 2016. Members of the auto industry argued that complying with these regulations would require major technological advances and raise the prices of vehicles by as much as $6,000 per automobile. U.S. District Judge William K. Sessions III dismissed these claims in his ruling. "The court remains unconvinced automakers cannot meet the challenge of Vermont and California's (greenhouse gas) regulations," he wrote. Environmentalists pressed the administration to grant California a waiver from the EPA for its emissions standards to take effect. Doing so would allow Vermont and other states to adopt these same standards under the Clean Air Act. Without such a waiver, Judge Sessions wrote, the Vermont rules would be invalid. Light-duty vehicles 2010–2016 In 2009, President Obama announced a new national fuel economy and emissions policy that incorporated California's contested plan to curb greenhouse gas emissions on its own, apart from federal government regulations. The standards are formatted such that each vehicle has an emissions target as a function of its "footprint", the product of its wheelbase and average track width, with separate functions for passenger cars and light trucks and progressively smaller targets by model year. Thus each manufacturer has a unique standard for each model year based on the characteristics of the vehicles it actually produces. The new standards established a credit trading system whereby manufacturers that overperform their annual target may sell credits to other manufacturers, which may then use them to cover a shortfall from failing to meet their own standards through emissions improvements. The combined fleet fuel economy for new cars and trucks with a GVWR of 10,000 pounds (4,500 kg) or less was projected to average 35.5 miles per gallon (mpg) for the 2016 model year based on the newly established targets and projected fleet mix. The average for cars will have to be 42 mpg, and for trucks 26 mpg, by 2016, in coordination with new CAFE standards. If the average fuel economy of a manufacturer's annual fleet of vehicle production falls below its defined standard, the manufacturer must pay a penalty, at the time US$5.50 per 0.1 mpg under the standard, multiplied by the manufacturer's total production for the U.S. domestic market; a worked example follows below. This is in addition to any gas guzzler tax, if applicable. Had CAFE targets been extended through 2026 under the Obama administration's plans, they would have sought a 54 mpg industry-wide average fuel efficiency for cars and light trucks manufactured in 2026 or later, with automobile manufacturers instructed to increase the fuel economy across all of their vehicles by 5% each year.
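As a quick illustration of the penalty formula just described, the short Python sketch below computes the civil penalty from a fleet's shortfall; the standard, fleet average, and production volume used in the example call are hypothetical figures, not any manufacturer's actual data.

# Sketch of the CAFE penalty formula described above: US$5.50 per 0.1 mpg
# below the standard, multiplied by total U.S. production.  All inputs in
# the example are hypothetical.
def cafe_penalty_usd(standard_mpg: float, fleet_avg_mpg: float,
                     units_produced: int, rate_per_tenth: float = 5.50) -> float:
    """Civil penalty in dollars; zero when the fleet meets its standard."""
    shortfall = max(0.0, standard_mpg - fleet_avg_mpg)
    return rate_per_tenth * (shortfall / 0.1) * units_produced

# e.g. a fleet 1.2 mpg under a 35.5 mpg standard, across 100,000 vehicles:
print(f"${cafe_penalty_usd(35.5, 34.3, 100_000):,.0f}")  # -> $6,600,000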
Trump-era rollback and Biden-era reversal (2017–2021) After Donald Trump was inaugurated as president in 2017, he instructed the NHTSA and EPA to roll back Obama's CAFE standards, increasing the 2026 target to a then-projected 202 g CO2/mi and requiring only an annual 1.5% fleet efficiency improvement. The new rule was issued in March 2020. The Trump administration argued the rollback was needed because stricter efficiency requirements would further raise the cost of cars for consumers. The move was criticized by several environmental groups, Consumer Reports, and the state of California, as the ruling coincided with Trump's efforts to remove the waiver for California's emissions exemptions. After becoming president in 2021, Joe Biden signed Executive Order 14057, "Catalyzing Clean Energy Industries and Jobs Through Federal Sustainability", which in addition to committing the federal government to implement clean transport options such as EVs, also committed to improving fuel efficiency standards and reversing the Trump administration's actions. The EPA issued a new rule in December 2021, enforceable by February 2022, that effectively restored the Obama-era standards by decreasing the fleet-wide emissions target to a projected 161 g CO2/mi by the 2026 model year. Consumer ratings Air pollution score EPA's air pollution score represents the amount of health-damaging and smog-forming airborne pollutants the vehicle emits. Scoring ranges from 0 (worst) to 10 (best). The pollutants considered are nitrogen oxides (NOx), particulate matter (PM), carbon monoxide (CO), formaldehyde (HCHO), and various hydrocarbon measures: non-methane organic gases (NMOG), non-methane hydrocarbons (NMHC), and total hydrocarbons (THC). This score does not include emissions of greenhouse gases (but see Greenhouse gas score, below). Greenhouse gas score EPA's greenhouse gas score reflects the amount of greenhouse gases a vehicle will produce over its lifetime, based on typical consumer usage. The scoring is from 0 to 10, where 10 represents the lowest amount of greenhouse gases. The greenhouse gas score is determined from the vehicle's estimated fuel economy and its fuel type. The lower the fuel economy, the more greenhouse gas is emitted as a by-product of combustion. The amount of carbon dioxide emitted per liter or gallon burned varies by fuel type, since each type of fuel contains a different amount of carbon per gallon or liter. The ratings reflect carbon dioxide (CO2), nitrous oxide (N2O) and methane (CH4) emissions, weighted to reflect each gas's relative contribution to the greenhouse effect. California emission standards Under Section 209 of the Clean Air Act (CAA), California is given the ability to apply for special waivers to apply its own emission standards for new motor vehicles, provided they are at least as stringent as the federal standards. California applies for this waiver through the EPA, which publishes the proposed standards for public review in the Federal Register. Based on its own review and public comments, the EPA then grants the waiver unless it determines that California's requested standards were "arbitrary and capricious" in their findings, that the standards are not needed to "meet compelling and extraordinary conditions", or that they are otherwise inconsistent with other aspects of the CAA. Since the waiver provision's enactment in 1967, California has applied for and received more than fifty waivers, which include emission standards across various vehicle classes.
Among these are two special sets of waivers: California initiated its zero-emission vehicle (ZEV) mandate in 1990. ZEVs are defined as vehicles that have no exhaust or evaporative emissions of any regulated pollutant. Vehicle manufacturers were required to have a percentage of their fleet meet these ZEV standards over a long-term schedule (2% by model year 1998 at its start), but the mandate schedule has shifted based on the unplanned rate of technology advancement and costs; as of 2020, its current target is to reach 8% ZEV by 2025, determined by fleet credits that account for vehicle range as well as contributions from any low-emission vehicles or plug-in hybrids. The EPA granted the initial request in 1990 and several updates. California had first requested to regulate greenhouse gas emissions (GHG) at stricter levels than federal ones in 2005 as part of its low-emission vehicle program. The EPA initially refused this waiver based on a decision from the United States Court of Appeals for the District of Columbia Circuit that determined the EPA did not have the authority to regulate GHG under the CAA; this ruling was challenged in the Supreme Court case Massachusetts v. Environmental Protection Agency (549 U.S. 497 (2007)), which held that the EPA did have this authority. In later actions, the EPA granted California its GHG waiver by 2009. State adoption of California Standards Section 177 of the CAA grants states the ability to adopt California emission standards instead of federal ones. As of December 2021, the following states have adopted the California standards, including their standards for ZEV and GHG: Revocation of waivers under the Trump administration Former President Donald Trump stated his concern about California's stricter emission standards and their impact on manufacturing costs in the automobile industry, though some political analysts asserted this also tied in with Trump's conservative ideology conflicting with California's more liberal stance. Along with the Obama-era mileage goals, Trump had expressed his intent to revoke California's waivers early in his presidency. Shortly after Ford, Volkswagen, Honda, and BMW announced their intentions to commit to the Obama-era mileage goals and California's emission standards across their fleets in July 2019, Trump announced his intention to roll back California's waivers. As part of Trump's Safer, Affordable, Fuel-Efficient (SAFE) program, the EPA and NHTSA proposed a new "One National Program Rule" on September 19, 2019, asserting that only the federal government may set emissions standards, so as to have one consistent set of fuel emission and mileage standards across the country. This rule included revoking the last set of waivers that the EPA had granted California in 2013 for its GHG and ZEV programs. California retained its ability to set emission standards that address ozone formation under the rule. Subsequent to this rule, California led a coalition of 23 states in suing the NHTSA in California v. Chao (Case 1:19-cv-02826) in the D.C. District Court in September 2019, asserting that the agency, in setting the rule, violated the intent of the CAA. The same group of states also filed suit against the EPA once it formally revoked the 2013 waiver in November 2019, in California v. Wheeler (Case 19-1239) in the United States Court of Appeals for the District of Columbia Circuit.
Further, both Minnesota and New Mexico, plaintiffs in both cases, stated they would take steps to adopt California's standards in their states as a result. Following the election of Joe Biden as president, the EPA and NHTSA moved to reverse the 2019 rule in April 2021, thus returning to the previous status quo for California. Following the EPA granting California its latest request for exemption, seventeen states sued the EPA in May 2022, arguing that because of the impact of California emission standards on vehicle manufacturing, the EPA's actions violate the equal sovereignty granted to the states by the Constitution, since they give California more power than other states in setting emissions regulations. Non-road engines Non-road engines, including equipment and vehicles that are not operated on the public roadways, are used in an extremely wide range of applications, each involving great differences in operating characteristics and engine technology. Emissions from all non-road engines are regulated by category. In the United States, the emission standards for non-road diesel engines are published in the US Code of Federal Regulations, Title 40, Part 89 (40 CFR Part 89). Tier 1–3 Standards were adopted in 1994 and were phased in between 1996 and 2000 for engines over 37 kW (50 hp). In 1998 the regulation was extended to engines under 37 kW, and more stringent Tier 2 and Tier 3 standards were introduced, scheduled to be phased in between 2000 and 2008. In 2004, the US EPA introduced the more stringent Tier 4 standards, scheduled to be phased in between 2008 and 2015. The testing cycles used for certification follow the ISO 8178 standards. Small engines Pollution from small engines, such as those used in gas-powered groundskeeping equipment, reduces air quality. Emissions from small offroad engines are regulated by the EPA. Specific pollutants subject to limits include hydrocarbons, carbon monoxide, and nitrogen oxides. Existing vehicle emissions standards Testing of existing vehicle emissions, through what are known as vehicle inspection and maintenance (I/M) programs, was introduced as part of the 1977 Amendments to the CAA. The 1970 Amendments introduced target NAAQS goals for air quality which were not met in many parts of the country. With the 1977 Amendments, the CAA required I/M programs in nonattainment states as part of their pollution prevention plans. For model years prior to 1996, emissions tests were performed using a chassis dynamometer-based test: the vehicle is driven so that the wheels of its main driven axle (front or rear) sit atop the dynamometer rollers, which are then unlocked to rotate freely. A collection line is attached to the tailpipe, and simulated airflow is pushed across the engine to mimic vehicle movement. The test operator then presses the accelerator of the car through a fixed test schedule: the acceleration from the engine translates to force and torque that are measured through the dynamometer, simultaneously mapped against analysis of the emissions from the tailpipe. After completion of the schedule, the computerized system calculates the emissions from the car and determines if it meets the appropriate specification for its model year. Since model year 1994, all light-duty vehicles (LDVs) and light-duty trucks (LDTs) manufactured for use in the United States have been required to use the standard on-board diagnostic OBD-II system. This is a computerized system that continually monitors the performance of the engine and its emission control system.
Instead of the dynamometer test, the operator connects the OBD-II port to a standard computer system, which downloads the stored information from the vehicle's on-board computer. The system will warn the operator if the OBD-II data indicate significant deviations from expected emissions control standards, indicating that repairs may be needed. See also Regulation of greenhouse gases under the Clean Air Act Regulation on non-exhaust emissions AP 42 Compilation of Air Pollutant Emission Factors Emissions standard List of low emissions locomotives Motor vehicle emissions Portable Emissions Measurement System Timeline of major U.S. environmental and occupational health regulation Vehicle emissions control References External links EPA fuel economy guide for consumers EPA Green Vehicles guide EPA Climate Change guide Dieselnet: Cars and Light-Duty Trucks—Tier 1 Dieselnet: Cars and Light-Duty Trucks—Tier 2 Dieselnet: Cars and Light-Duty Trucks—California Emissions: Vehicle Emission Ratings Decoded
energy cannibalism
Energy cannibalism refers to an effect where rapid growth of a specific energy-producing industry creates a need for energy that uses (or cannibalizes) the energy of existing power plants. Thus during rapid growth the industry as a whole produces no new energy because the output is consumed to fuel the embodied energy of future power plants. Theoretical underpinnings In order for an "emission free" power plant to have a net negative impact on the greenhouse gas emissions of the energy supply, it must produce enough emission-less electricity to offset both the greenhouse gas emissions that it is directly responsible for (e.g. from concrete used to construct a nuclear power plant) and the greenhouse gas emissions from electricity generated for its construction (e.g. if coal is used to generate electricity while constructing a nuclear power plant). This can become challenging during rapid growth of the "emission free" technology because it may require the construction of additional power plants of the older technology simply to power the construction of the new "emission free" technology. Derivation First, all the individual power plants of a specific type can be viewed as a single aggregate plant or ensemble and can be observed for its ability to mitigate emissions as it grows. This ability is first dependent on the energy payback time of the plant. An aggregate plant with a total installed capacity of $C_T$ (in GW) produces $$E_T = C_T \, t \qquad (1)$$ of electricity per year, where $t$ (in hours per year) is the fraction of time the plant is running at full capacity, $C_T = \sum_{n=1}^{N} C_n$, $C_n$ is the capacity of an individual power plant, and $N$ is the total number of plants. If we assume that the energy industry grows at a rate $r$ (in units of 1/year, e.g. 10% growth = 0.1/year), it will produce additional capacity at a rate (in GW/year) of $$r \, C_T \qquad (2)$$ After one year, the additional electricity produced by the new capacity would be $$E_{new} = r \, C_T \, t \qquad (3)$$ The time that an individual power plant takes to pay for itself in terms of the energy it needs over its life cycle, or the energy payback time, is given by the principal energy invested (over the entire life cycle), $E_P$, divided by the energy produced (or fossil-fuel energy saved) per year, $E_{ann}$. Thus if the energy payback time of a plant type is $E_P/E_{ann}$ (in years), the energy investment rate needed for the sustained growth of the entire power plant ensemble is given by the cannibalistic energy $E_{Can}$: $$E_{Can} = \frac{E_P}{E_{ann}} \, r \, C_T \, t \qquad (4)$$ The power plant ensemble will not produce any net energy if the cannibalistic energy is equivalent to the total energy produced. So by setting equation (1) equal to (4), the following results: $$C_T \, t = \frac{E_P}{E_{ann}} \, r \, C_T \, t$$ and with some simple algebra it simplifies to: $$\frac{1}{r} = \frac{E_P}{E_{ann}}$$ So if one over the growth rate is equal to the energy payback time, the aggregate type of energy plant produces no net energy until growth slows down. Greenhouse gas emissions This analysis was for energy, but the same analysis holds for greenhouse gas emissions: the principal greenhouse gas emissions emitted in order to provide for the power plant, divided by the emissions offset every year, must be equal to one over the growth rate of the type of power to break even. Example For example, if the energy payback is 5 years and the capacity growth is 20%, no net energy is produced and no greenhouse gas emissions are offset if the only power input to the growth is fossil during the growth period.
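A short numeric Python sketch of the break-even condition derived above, using the example's figures (a 5-year payback with 20% growth yields zero net output); the capacity and hours-at-full-power inputs are illustrative, not data for any real industry.

# Numeric sketch of the derivation above: annual energy produced by the
# ensemble (eq. 1) minus the cannibalistic energy needed to sustain growth
# (eq. 4).  Inputs are illustrative placeholders.
def net_energy_gwh(capacity_gw: float, hours_at_full_power: float,
                   growth_rate: float, payback_years: float) -> float:
    produced = capacity_gw * hours_at_full_power              # eq. (1)
    cannibalized = payback_years * growth_rate * produced     # eq. (4)
    return produced - cannibalized

# 5-year payback with 20% growth breaks even (1/r equals the payback time):
print(net_energy_gwh(100, 2000, 0.20, 5.0))  # -> 0.0
# Slower 10% growth leaves half the output as net energy:
print(net_energy_gwh(100, 2000, 0.10, 5.0))  # -> 100000.0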
Applications to the nuclear industry In the article "Thermodynamic Limitations to Nuclear Energy Deployment as a Greenhouse Gas Mitigation Technology", the necessary growth rate, r, of the nuclear power industry was calculated to be 10.5%. This growth rate is very similar to the 10% limit due to energy payback for the nuclear power industry in the United States, calculated in the same article from a life-cycle analysis for energy. These results indicate that any energy policies intended to drive down greenhouse gas emissions through deployment of additional nuclear reactors will not be effective unless the nuclear energy industry in the U.S. improves its efficiency. Some of the energy input into nuclear power plants occurs as production of concrete, which consumes little electricity from power plants. Applications to other industries As with nuclear power plants, hydroelectric dams are built with large amounts of concrete, which equates to considerable CO2 emissions but little power usage. The long lifespan of hydroelectric plants then contributes to a positive power ratio for a longer time than most other power plants. For the environmental impact of solar power, the energy payback time of a power-generating system is the time required to generate as much energy as was consumed during production of the system. In 2000 the energy payback time of PV systems was estimated as 8 to 11 years; in 2006 this was estimated to be 1.5 to 3.5 years for crystalline silicon PV systems and 1–1.5 years for thin-film technologies (southern Europe). Similarly, the energy return on investment (EROI) is to be considered. For wind power, energy payback is around one year. References
fugitive emission
Fugitive emissions are leaks and other irregular releases of gases or vapors from a pressurized containment – such as appliances, storage tanks, pipelines, wells, or other pieces of equipment – mostly from industrial activities. In addition to the economic cost of lost commodities, fugitive emissions contribute to local air pollution and may cause further environmental harm. Common industrial gases include refrigerants and natural gas, while less common examples are perfluorocarbons, sulfur hexafluoride, and nitrogen trifluoride. Most occurrences of fugitive emissions are small, of no immediate impact, and difficult to detect. Nevertheless, due to rapidly expanding activity, even the most strictly regulated gases have accumulated outside of industrial workings to reach measurable levels globally. Fugitive emissions include many poorly understood pathways by which the most potent and long-lived ozone-depleting substances and greenhouse gases enter Earth's atmosphere. In particular, the build-up of a variety of man-made halogenated gases over the past several decades contributes more than 10% of the radiative forcing which drives global climate change as of 2020. Moreover, the ongoing banking of small to large quantities of these gases within consumer appliances, industrial systems, and abandoned equipment throughout the world has all but guaranteed their future emission for many years to come. Fugitive emissions of CFCs and HCFCs from legacy equipment and process uses have continued to hinder recovery of the stratospheric ozone layer in the years since most production was banned in accordance with the international Montreal Protocol. Similar legacy issues continue to be created at ever-increasing scale with the mining of fossil hydrocarbons, including gas venting and fugitive gas emissions from coal mines, oil wells, and gas wells. Economically depleted mines and wells may be abandoned or poorly sealed, while properly decommissioned facilities may experience emission increases following equipment failures or earth disturbances. Satellite monitoring systems are beginning to be developed and deployed to aid identification of the largest emitters, sometimes known as super-emitters. Emissions inventory A detailed inventory of greenhouse gas emissions from upstream oil and gas activities in Canada for the year 2000 estimated that fugitive equipment leaks had a global warming potential equivalent to the release of 17 million metric tonnes of carbon dioxide, or 12 percent of all greenhouse gases emitted by the sector, while another report put fugitive emissions at 5.2% of world greenhouse gas emissions in 2013. Venting of natural gas, flaring, accidental releases and storage losses accounted for an additional 38 percent. Fugitive emissions present other risks and hazards. Emissions of volatile organic compounds such as benzene from oil refineries and chemical plants pose a long-term health risk to workers and local communities. In situations where large amounts of flammable liquids and gases are contained under pressure, leaks also increase the risk of fire and explosion. Pressurized equipment Leaks from pressurized process equipment generally occur through valves, pipe connections, mechanical seals, or related equipment. Fugitive emissions also occur at evaporative sources such as wastewater treatment ponds and storage tanks.
Because of the huge number of potential leak sources at large industrial facilities and the difficulties in detecting and repairing some leaks, fugitive emissions can be a significant proportion of total emissions. Though the quantities of leaked gases may be small, gases that have serious health or environmental impacts can cause a significant problem. Fenceline monitoring Fenceline monitoring techniques involve the use of samplers and detectors positioned at the fenceline of a facility. Several types of devices are used to provide data on a facility's fugitive emissions, including passive samplers with sorbent tubes, and "SPod" sensors that provide real-time data. Detection and repair To minimize and control leaks at process facilities, operators carry out regular leak detection and repair activities. Routine inspections of process equipment with gas detectors can be used to identify leaks and estimate the leak rate in order to decide on appropriate corrective action. Proper routine maintenance of equipment reduces the likelihood of leaks. Because of the technical difficulties and costs of detecting and quantifying actual fugitive emissions at a site or facility, and the variability and intermittent nature of emission flow rates, bottom-up estimates based on standard emission factors are generally used for annual reporting purposes (a short numeric sketch of this approach appears at the end of this entry). New technologies New technologies are under development that could revolutionize the detection and monitoring of fugitive emissions. One technology, known as differential absorption lidar (DIAL), can be used to remotely measure concentration profiles of hydrocarbons in the atmosphere up to several hundred meters from a facility. DIAL has been used for refinery surveys in Europe for over 15 years. A pilot study carried out in 2005 using DIAL found that actual emissions at a refinery were fifteen times higher than those previously reported using the emission factor approach; the fugitive emissions were equivalent to 0.17% of the refinery throughput. Portable gas leak imaging cameras are another new technology that can be used to improve leak detection and repair, leading to reduced fugitive emissions. The cameras use infrared imaging technology to produce video images in which invisible gases escaping from leak sources can be clearly identified. Types Natural gas See also Gas flare Greenhouse gas Leaks Volatile organic compound Fugitive gas emissions References Works cited IPCC AR5 WG1 (2013), Stocker, T.F.; et al. (eds.), Climate Change 2013: The Physical Science Basis. Working Group 1 (WG1) Contribution to the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), Cambridge University Press. Climate Change 2013 Working Group 1 website. External links 2006 IPCC Guidelines for National Greenhouse Gas Inventories (see Section 4.2).
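As a rough illustration of the emission-factor approach referenced in the "Detection and repair" discussion above, here is a minimal Python sketch that multiplies component counts by per-component average leak factors; the component names and factor values are illustrative placeholders, not figures from any published inventory methodology.

# Minimal sketch of a bottom-up emission-factor estimate: component counts
# multiplied by per-component average leak factors.  All values here are
# illustrative placeholders, not published inventory figures.
ILLUSTRATIVE_FACTORS_KG_PER_HOUR = {
    "valve": 0.0002,
    "pipe_connector": 0.00008,
    "pump_seal": 0.001,
}

def annual_fugitive_tonnes(component_counts: dict) -> float:
    """Estimated annual fugitive emissions (tonnes) for one facility."""
    kg_per_hour = sum(ILLUSTRATIVE_FACTORS_KG_PER_HOUR[kind] * count
                      for kind, count in component_counts.items())
    return kg_per_hour * 8760 / 1000  # hours per year; kg -> tonnes

print(annual_fugitive_tonnes({"valve": 5000, "pipe_connector": 20000, "pump_seal": 40}))
# -> ~23.1 tonnes/year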
social cost of carbon
The social cost of carbon (SCC) is the marginal cost of the impacts caused by emitting one additional tonne of carbon (as carbon dioxide) at any point in time. The purpose of putting a price on a tonne of emitted carbon or CO2 is to aid people in evaluating whether adjustments to curb climate change are justified. The social cost of carbon is a calculation focused on taking corrective measures to a "State of Nature" where there is evidence of market failure. The Intergovernmental Panel on Climate Change suggested that a carbon price of $100/tCO2e could reduce global GHG emissions by at least half of the 2019 level by 2030. Prominent 2021 models for the social cost of carbon calculated damages of more than $3000/tCO2e as a result of economy feedbacks and falling global GDP growth rates, while policy recommendations for a carbon price ranged from about $50/tCO2e to $200/tCO2e. The UN accounts for many greenhouse gas emissions that contribute to climate change and reports global emissions on both a tCO2e and a tC basis. A tentative sense check in 2021 by A. T. Parkinson, using the simple Hobbesian ratio of historic data, suggested a reasonable figure in 2016 could have been somewhere between $300/tC and $400/tC, when considering UN and OECD accounts. It was expected that the system under evaluation is fundamentally slow-response, with long-run dynamics relative to the other systems with which humans typically interact, and hence need not be estimated with precision frequently. However, the ability of the general public to readily audit calculation of the indicator appears vital. From the liberal perspective, the social cost of carbon (a Commonwealth Cost of Carbon, in this instance) appears a candidate key performance indicator for the overall performance of world leaders. Calculating Calculating the SCC requires estimating the societal damages caused by anthropogenic greenhouse gas emissions. This includes manifestations of environmental degradation such as public disorder, conflict and reduced activity. Utilitarian valuations for a people can be difficult because the impacts on ecosystems do not have a market price and any ideal observer lacks legitimacy over many nation states. However, Hobbesian valuations for the global commons can be relatively simple and yield quite different outcomes. Calculating Stability for the Global Commons Greenhouse gas emissions are an essential good that life cannot do without. The carbon cycle is more fundamental to existence than the sovereignty of nation states. Carbon management has relevance to all decisions, activities and governance frameworks. Climate change may be considered a relatively recent symptom of a rather ancient challenge for carbon-based humanity seeking contracts in good faith within a "State of Nature". Hence, in the liberal tradition, one would be concerned with higher-priority considerations than rates of saving. A theory of the social contract put forward by Hobbes in Leviathan has universal relevance to the required evaluations of this "State of Nature". Following such theory, one simply divides historic enforcement costs by anthropogenic greenhouse gas emissions to arrive at past "Commonwealth Costs of Carbon". Historic enforcement costs may be taken as the sum total of global military, public order and safety expenditure. These costs arise through imperfections in judgements, though they are a necessity to assure the peoples' covenant with the global commons. Everything else that the global commons provides is of benefit. This simple approach does not require an ideal observer.
All peoples are treated equally, and allocations of property rights have no influence on the peoples' ability to estimate, as long as the necessary accounts are public. Such an approach could be adopted by a "Society of Peoples", living free of conflict within stable, lawful and democratic communities. With such a following, Commonwealth Costs of Carbon may be regarded as the principal key performance indicators of statesmanship for a Commonwealth of Peoples, reflecting global ecosystem stability. It is important to note that the public policies of intergovernmental organisations, such as the Commonwealth of Nations or the Commonwealth of Independent States, may not necessarily recognise a Commonwealth of Peoples approach. Calculating Rates of Saving for a People Alternatively, it has been popular to compare rates of saving over time involving a discount rate or time preference. These rates determine the weight placed on impacts occurring at different times, applying a theoretical model of inter-generational welfare developed by Ramsey. Such utilitarian estimates of the SCC come from integrated assessment models (IAMs), which predict the effects of climate change under various scenarios and allow for the calculation of monetized damages. One of the most widely used IAMs is the Dynamic Integrated model of Climate and the Economy (DICE). The DICE model, developed by William Nordhaus, makes provisions for the calculation of a social cost of carbon. The DICE model defines the SCC to be "equal to the economic impact of a unit of emissions in terms of t-period consumption as a numéraire". Other popular IAMs used to calculate the social cost of carbon include the Policy Analysis for Greenhouse Effect model (PAGE) and the Climate Framework for Uncertainty, Negotiation, and Distribution (FUND). In the United States, the Trump administration was criticised for using existing IAMs to calculate the SCC that lacked appropriate calculations for interactions between regions. For instance, climate catastrophes caused by climate change in one region may have a domino impact on the economy of neighboring regions or trading partners. The wide range of estimates is explained mostly by underlying uncertainties in the science of climate change, including the climate sensitivity (a measure of the amount of global warming expected for a doubling in the atmospheric concentration of CO2), different choices of discount rate, the treatment of equity, and how potential catastrophic impacts are estimated. The Interagency Working Group in the United States usually uses four values when calculating the cost. Three of the values come from using discount rates of 2.5%, 3%, and 5% in the integrated assessment models. The SCC must also reflect the different probabilities that the mitigation being used for climate change betters or worsens the environment; this is where the fourth value comes into play, because there can be lower-probability but higher-impact outcomes from climate change. The fourth value uses the 3% discount rate but is set at the 95th percentile of the frequency distribution of estimates. In "The U.S. Government's Social Cost of Carbon Estimates after Their First Two Years: Pathways for Improvement", Kopp and Mignone suggest that these calculation rates do not reflect the multiple ways that humans can respond to climate change.
They propose an alternative approach that should be considered: calculating through a cost-benefit optimization analysis based on whether the public "panics" about climate change and implements mitigation policies accordingly. Discount rate What discount rate to use is "consequential and contentious" because it defines the relative value of present costs and future damages, an inherently ethical and political judgment. A 2015 survey of 200 general economists found that most preferred a rate between 1% and 3%. Some, like Nordhaus, advocate for a time discount rate that is pegged to the current average rate of time discount as estimated from market interest rates; critics counter that this reasoning is spurious because intragenerational interest rates have nothing to do with the intergenerational ones in question. Others, like Stern, propose a much smaller discount rate because "normal" discount rates are skewed when applied over the time scales over which climate change acts. A 2015 survey of 1,100 economists who had published on climate change found that those who estimated discount rates preferred that they decline over time and that explicit ethical considerations be factored in. Carbon pricing recommendations According to economic theory, a carbon price should be set equal to the SCC. In reality, carbon taxes and carbon emission trading only cover a limited number of countries and sectors, at prices vastly below the optimal SCC. The social cost of carbon ranges from −$13.36 to $2,386.91/tCO2, while carbon pricing in 2022 only ranged from $0.50 to $137.30/tCO2. From a technological cost perspective, the 2018 IPCC report suggested that limiting global warming below 1.5 °C requires technology costs of around $135 to $5,500/tCO2 in 2030 and $245 to $13,000/tCO2 in 2050. This is more than three times higher than for a 2 °C limit. In 2021, the study "The social cost of carbon dioxide under climate-economy feedbacks and temperature variability" estimated costs of more than $3,000/tCO2e. A study published in September 2022 in Nature estimated the social cost of carbon (SCC) to be $185 per tonne of CO2, 3.6 times higher than the U.S. government's then-current value of $51 per tonne. Large studies in the late 2010s estimated the social cost of carbon as high as $417/tCO2e, or as low as $54/tCO2e. Both those studies subsume wide ranges; the latter is a meta-study whose source estimates range from −$13.36/tCO2e to $2,386.91/tCO2e. Note that the costs derive not from the element carbon, but from the molecule carbon dioxide: each tonne of carbon dioxide consists of about 0.27 tonnes of carbon and 0.73 tonnes of oxygen. According to David Anthoff and Johannes Emmerling, the social cost of carbon can be expressed by the following equation: $$SCC_x = \left(\frac{\partial w}{\partial c_{x,0}}\right)^{-1} \sum_{t=0}^{T}\sum_{r=1}^{R} \frac{\partial w}{\partial c_{r,t}}$$ This equation represents how one additional tonne of carbon dioxide impacts the environment and incorporates equity and social impact. Chen, Van der Beek, and Cloud examine the benefits of incorporating a second measure of the externalities of carbon by accounting for both the social cost of carbon and the risk cost of carbon. This technique involves accounting for the cost of risk on climate change goals. Matsuo and Schmidt suggest that carbon policies revolve around two renewable energy targets: bringing down the cost of renewable energy, and growing the industry.
The problem with these objectives in policy is that prioritization can affect how the policy plays out. This can result in a negative impact on the social cost of carbon by affecting how renewable energy is incorporated into society. Newbery, Reiner, and Ritz discuss a carbon price floor (CPF) as a means of giving effect to the social cost of carbon. They discuss how incorporating a CPF can, over the long term, reduce coal usage, increase electricity pricing, and spur innovation and investment in low-carbon alternatives. Yang et al. estimated the social cost of carbon under alternative socioeconomic pathways. According to their results, regional rivalries with increased trade friction can increase the social cost of carbon by a factor of 2 to 4. Use in investment decisions Organizations that take an integrated management approach are using the social cost of carbon to help evaluate investment decisions and guide long-term planning, in order to consider the full extent of how their operations impact society and the environment. By placing a value on carbon emissions, decision makers can use this value to expand upon traditional financial decision-making tools and create new metrics for measuring the short- and long-term outcomes of their actions. This means taking the triple bottom line a step further and promotes an integrated bottom line (IBL) approach. Prioritizing an IBL approach begins with changing the way we think about traditional financial measurements, as these do not take into consideration the full extent of the short- and long-term impacts of a decision or action. Instead, return on investment can be expanded to return on integration, internal rate of return can evolve into integrated rate of return, and instead of focusing on net present value, companies can plan for integrated future value. By country The SCC is highly sensitive to socioeconomic narratives. Because carbon dioxide is a global externality, a liberal society would never want to set policy based on anything other than the global aggregate value (hence treating all peoples as equal in their judgement). One might expect any international contracting in the interest of the global commons to be made in good faith. However, given a lack of trust between governments and popular adoption of other philosophical doctrines, country-level or regional-level social costs of carbon are also calculated. Yang et al. calculated the regional social cost of carbon using a regional cost-benefit IAM (RICE). Generally, SCCs in developing countries are much more sensitive to socioeconomic uncertainty and risk valuation: average SCCs in developing regions are 20 times higher than in developed regions. Cost-benefit IAMs require more computational resources to provide the SCC at the country level, so Ricke et al. calculated the social cost of carbon based on discounted future damage. Their estimates show that the countries consistently incurring large fractions of the global cost include India, China, Saudi Arabia and the United States. United Kingdom The UK government has estimated the social cost of carbon since 2002, when a Government Economic Service working paper, Estimating the social cost of carbon emissions, suggested £19/tCO2 within a range of £10 to £38/tCO2. This cost was set to rise at a rate of £0.27/tCO2 per year to reflect the increasing marginal cost of emissions. In 2009 the UK government conducted a review of the approach taken to developing carbon values.
The conclusion of the review was to move to a "target-consistent" or "abatement cost" approach to carbon valuation, rather than a "social cost of carbon" (SCC) approach. Following a cross-government review during 2020 and 2021, UK carbon valuations were further updated to be consistent with the global 1.5 °C goal and the UK's domestic targets. United States In February 2021 the US government set the social cost of carbon to $51 per tonne, based on a 3% discount rate, while planning a more thorough review of the issue. However, in February 2022 a court ruled against the government and said the figure was invalid, as only damage within the US could be included. In March 2022, a three-judge panel of the 5th Circuit Court of Appeals stayed the injunction, permitting continued use of the interim figure. The social cost of carbon is used in policymaking. Executive Order 12866 requires agencies to consider the costs and benefits of any potential regulations and, bearing in mind that some factors may be difficult to assign monetary value to, only propose regulations whose benefits would justify the cost. Social cost of carbon estimates allow agencies to bring considerations of the impact of increased carbon dioxide emissions into cost-benefit analyses of proposed regulations. The United States government was not required to implement greenhouse gas emission requirements until after the 2007 court case Massachusetts v. EPA, and it struggled to do so due to the lack of an accurate social cost of carbon to guide policymaking. Due to the varying estimates of the social cost of carbon, in 2009 the Office of Management and Budget (OMB) and the Council of Economic Advisers established the Interagency Working Group on the Social Cost of Greenhouse Gases (IWG) in an attempt to develop standardized estimates of the SCC for the use of federal agencies considering regulatory policies. This body was formerly named the Interagency Working Group on the Social Cost of Carbon, but its scope has since been extended to include multiple greenhouse gases. The IWG works closely with the National Academies of Sciences, Engineering, and Medicine when researching and creating up-to-date reports on the SCC. When the 2010 and 2013 social cost of carbon estimates were developed, a consensus-based approach was used by the working groups, drawing on existing academic works, studies, and models, as reviewed by the U.S. Government Accountability Office. These created estimates for the social costs and benefits that government agencies could use when creating environmental policies. Members of the public are able to comment on the developed social cost of carbon. Along with the Office of Management and Budget (OMB) and the Council of Economic Advisers, six federal agencies participated in the working group: the Environmental Protection Agency (EPA), the United States Department of Agriculture, the United States Department of Commerce, the United States Department of Energy, the United States Department of Transportation (DOT), and the United States Department of the Treasury. The Interagency Working Group analyzed and advised that policy surrounding the social cost of carbon must be implemented based on global impacts rather than domestic ones. Support for this expansion in scope stems from theories that climate change may lead to global migration and to political and environmental destabilization that affects the national security and economy of the United States, as well as its allies and trading partners.
In the United States, the social cost of carbon is best seen as a continuously updated estimate whose end goal is public and scientific acceptance in support of efficient environmental policy. The price set for the social cost of carbon depends on the administration in charge. The Obama administration paved the way for the first estimate putting a price on carbon emissions, estimating the cost at $36 per tonne in 2015, $42 in 2020, and $46 in 2025. The Trump administration estimated between $1 and $7 per tonne in economic damage in 2020; Trump's Executive Order 13783 mandated that SCC estimates be calculated using guidelines from the 2003 OMB Circular A-4, rather than guidelines based on more recent climate science. In November 2022, the EPA issued an estimate of $190 per ton for 2020.

Criticism
Longer-term planning using the SCC has been criticized as extremely uncertain, since estimates must change over time and with the level of emissions, and some claim it is of little use to the policymakers of nation states given that the Paris Agreement sets a goal of limiting temperature rise to 2 °C. Calculating the SCC using the Ramsey model introduces uncertainty, particularly because future economic growth and socioeconomic development paths are unknown. Societal preferences over development, international trade, and the potential for technological innovation, as well as national preferences regarding energy development, should be taken into account. The discount rate, the damage function, and the pending climate system response also contribute to uncertainty. Furthermore, Ramsey-based SCC calculations produce a range of figures, of which the most commonly used is the central case value (an average over the entire data set at a given discount rate).
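To make the discount-rate sensitivity concrete, the Ramsey rule sets the discount rate as r = rho + eta * g, where rho is the pure rate of time preference, eta the elasticity of the marginal utility of consumption, and g the growth rate of per-capita consumption. The sketch below compares two stylized calibrations, roughly in the spirit of the Stern Review and of Nordhaus's work; the exact parameter values and the assumed 2% growth rate are illustrative, not taken from any specific study.

```python
import math

# Sketch of the Ramsey discount rate, r = rho + eta * g, which drives much of
# the disagreement over SCC values. The calibrations below are stylized
# stand-ins (roughly Stern-like and Nordhaus-like); g = 2% is an assumption.

def ramsey_rate(rho, eta, g):
    """rho: pure rate of time preference; eta: elasticity of the marginal
    utility of consumption; g: per-capita consumption growth rate."""
    return rho + eta * g

g = 0.02
for name, rho, eta in [("Low-discounting (Stern-like)", 0.001, 1.0),
                       ("High-discounting (Nordhaus-like)", 0.015, 2.0)]:
    r = ramsey_rate(rho, eta, g)
    weight_100y = math.exp(-100 * r)  # value today of $1 of damage in 100 years
    print(f"{name}: r = {r:.1%}, 100-year weight = {weight_100y:.3f}")
# The two parameter choices weight century-out damages about 30-fold
# differently, which is why Ramsey-based SCC ranges are so wide.
```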
"Chapter 4: Strengthening and Implementing the Global Response" (PDF). Global Warming of 1.5 °C. pp. 313–443. === Other references ===
shell plc
Shell plc is a British multinational oil and gas company headquartered in London, England. Shell is a public limited company with a primary listing on the London Stock Exchange (LSE) and secondary listings on Euronext Amsterdam and the New York Stock Exchange. A core component of Big Oil, Shell is the second-largest investor-owned oil and gas company in the world by revenue (after ExxonMobil), and among the world's largest companies in any industry. Shell was formed in 1907 through the merger of Royal Dutch Petroleum Company of the Netherlands and The "Shell" Transport and Trading Company of the United Kingdom. The combined company rapidly became the leading competitor of the American Standard Oil, and by 1920 Shell was the largest producer of oil in the world. Shell first entered the chemicals industry in 1929. Shell was one of the "Seven Sisters" which dominated the global petroleum industry from the mid-1940s to the mid-1970s. In 1964, Shell was a partner in the world's first commercial sea transportation of liquefied natural gas (LNG). In 1970, Shell acquired the mining company Billiton, which it subsequently sold in 1994 and which now forms part of BHP. In recent decades gas has become an increasingly important part of Shell's business, and Shell acquired BG Group in 2016. Shell is vertically integrated and is active in every area of the oil and gas industry, including exploration, production, refining, transport, distribution and marketing, petrochemicals, power generation, and trading. Shell has operations in over 99 countries, produces around 3.7 million barrels of oil equivalent per day and has around 44,000 service stations worldwide. As of 31 December 2019, Shell had total proved reserves of 11.1 billion barrels (1.76×10⁹ m³) of oil equivalent. Shell USA, its principal subsidiary in the United States, is one of its largest businesses. Shell holds 44% of Raízen, a publicly listed joint venture with Cosan, which is the third-largest Brazil-based energy company. In addition to the main Shell brand, the company also owns the Jiffy Lube, Pennzoil and Quaker State brands. Shell is a constituent of the FTSE 100 Index and had a market capitalisation of US$199 billion on 15 September 2022, the largest of any company listed on the LSE and the 44th-largest of any company in the world. By 2021 revenues, Shell is the second-largest investor-owned oil company in the world (after ExxonMobil), the largest company headquartered in the United Kingdom, the second-largest company headquartered in Europe (after Volkswagen), and the 15th-largest company in the world. Until its unification in 2005 as Royal Dutch Shell plc, the firm operated as a dual-listed company, whereby the British and Dutch companies maintained their legal existence and separate listings but operated as a single-unit partnership. From 2005 to 2022, the company had its headquarters in The Hague, its registered office in London, and two classes of shares (A and B). In January 2022, the firm merged the A and B shares, moved its headquarters to London, and changed its legal name to Shell plc.

History

Origins
The Royal Dutch Shell Group was created in April 1907 through the amalgamation of two rival companies: the Royal Dutch Petroleum Company (Dutch: Koninklijke Nederlandse Petroleum Maatschappij) of the Netherlands and the Shell Transport and Trading Company Limited of the United Kingdom. It was a move largely driven by the need to compete globally with Standard Oil.
The Royal Dutch Petroleum Company was a Dutch company founded in 1890 to develop an oilfield in Pangkalan Brandan, North Sumatra, and initially led by August Kessler, Hugo Loudon, and Henri Deterding. The "Shell" Transport and Trading Company (the quotation marks were part of the legal name) was a British company, founded in 1897 by Marcus Samuel, 1st Viscount Bearsted, and his brother Samuel Samuel. Their father had owned an antiques company in Houndsditch, London, which expanded in 1833 to import and sell seashells, after which the company "Shell" took its name. For various reasons, the new firm operated as a dual-listed company, whereby the merging companies maintained their legal existence but operated as a single-unit partnership for business purposes. The terms of the merger gave 60 percent stock ownership of the new group to Royal Dutch and 40 percent to Shell. Both became holding companies for Bataafsche Petroleum Maatschappij, containing the production and refining assets, and Anglo-Saxon Petroleum Company, containing the transport and storage assets. National patriotic sensibilities would not permit a full-scale merger or takeover of either of the two companies. The Dutch company, Koninklijke Nederlandsche Petroleum Maatschappij at The Hague, was in charge of production and manufacture, while the British Anglo-Saxon Petroleum Company, based in London, directed the transport and storage of the products. In 1912, Royal Dutch Shell purchased the Rothschilds' Russian oil assets in a stock deal. The Group's production portfolio then consisted of 53 percent from the East Indies, 29 percent from the Russian Empire, and 17 percent from Romania.

20th century
During the First World War, Shell was the main supplier of fuel to the British Expeditionary Force. It was also the sole supplier of aviation fuel and supplied 80 percent of the British Army's TNT. It also volunteered all of its shipping to the British Admiralty. The German invasion of Romania in 1916 saw 17 percent of the group's worldwide production destroyed. In 1919, Shell took control of the Mexican Eagle Petroleum Company and in 1921 formed Shell-Mex Limited, which marketed products under the "Shell" and "Eagle" brands in the United Kingdom. During the Genoa Conference of 1922, Royal Dutch Shell was in negotiations for a monopoly over Soviet oilfields in Baku and Grozny, although the leak of a draft treaty led to the breakdown of the talks. In 1929, Shell Chemicals was founded. By the end of the 1920s, Shell was the world's leading oil company, producing 11 percent of the world's crude oil supply and owning 10 percent of its tanker tonnage. Located on the north bank of the River Thames in London, Shell Mex House was completed in 1931 and was the head office for Shell's marketing activity worldwide. In 1932, partly in response to the difficult economic conditions of the Great Depression, Shell-Mex merged its UK marketing operations with those of BP (British Petroleum) to create Shell-Mex & BP, a company that traded until the brands separated in 1975. The Royal Dutch Company ranked 79th among United States corporations in the value of World War II military production contracts. The 1930s saw Shell's Mexican assets seized by the local government. After the invasion of the Netherlands by Nazi Germany in 1940, the head office of the Dutch companies was moved to Curaçao.
In 1945, Shell's Danish headquarters in Copenhagen, at the time being used by the Gestapo, was bombed by Royal Air Force De Havilland Mosquitoes in Operation Carthage. In 1937, Iraq Petroleum Company (IPC), 23.75 percent owned by Royal Dutch Shell, signed an oil concession agreement with the Sultan of Muscat. In 1952, IPC offered financial support to raise an armed force that would assist the Sultan in occupying the interior region of Oman, an area that geologists believed to be rich in oil. This led to the 1954 outbreak of the Jebel Akhdar War in Oman, which lasted for more than five years. Around 1952, Shell was the first company to purchase and use a computer in the Netherlands. The computer, a Ferranti Mark 1*, was assembled and used at the Shell laboratory in Amsterdam. In 1970, Shell acquired the mining company Billiton, which it subsequently sold in 1994.

In the 1990s, protesters criticised the company's environmental record, particularly the possible pollution caused by the proposed disposal of the Brent Spar platform into the North Sea. Despite support from the UK government, Shell reversed the decision under public pressure but maintained that sinking the platform would have been environmentally better. Shell subsequently published an unequivocal commitment to sustainable development, supported by executive speeches reinforcing this commitment. Shell was later criticised by the European Commission and five European Union members after deciding to leave part of its decommissioned oil rigs standing in the North Sea. Shell argued that removing them would be too costly and risky. Germany said that the estimated 11,000 tonnes of raw oil and toxins remaining in the rigs would eventually seep into the sea, and called it a "ticking timebomb". On 15 January 1999, off the Argentine town of Magdalena, Buenos Aires, the Shell tanker Estrella pampeana collided with a German cargo ship, emptying its contents into the Río de la Plata and polluting the environment, drinking water, plants and animals. Over a decade after the spill, a referendum held in Magdalena determined the acceptance of a US$9.5 million compensatory payout from Shell. Shell denied responsibility for the spill, but an Argentine court ruled in 2002 that the corporation was responsible.

21st century
In 2002, Shell acquired Pennzoil-Quaker State through its American division for US$22 per share, or about US$1.8 billion. Through its acquisition of Pennzoil, Shell became a descendant of Standard Oil. With the acquisition, Shell inherited multiple auto-part brands including Jiffy Lube, Rain-X, and Fix-a-Flat. Journalists saw the company as notably late in its acquisition, with Shell viewed as streamlining its assets around the same time as other major mergers and acquisitions in the industry, such as BP's purchase of Amoco and the merger of Exxon and Mobil. In 2004, Shell overstated its oil reserves, resulting in loss of confidence in the group, a £17 million fine by the Financial Services Authority and the departure of the chairman Philip Watts. A lawsuit resulted in the payment of $450 million to non-American shareholders in 2007. As a result of the scandal, the corporate structure was simplified: two classes of ordinary shares, A (code RDSA) and B (code RDSB), identical but for the tax treatment of dividends, were issued for the company.
In November 2004, following a period of turmoil caused by the revelation that Shell had been overstating its oil reserves, it was announced that the Shell Group would move to a single capital structure, creating a new parent company to be named Royal Dutch Shell plc, with its primary listing on the LSE, a secondary listing on Euronext Amsterdam, its headquarters and tax residency in The Hague, Netherlands, and its registered office in London. The company had already been incorporated in 2002 as Forthdeal Limited, a shelf corporation set up by Swift Incorporations Limited and Instant Companies Limited, both based in Bristol. The unification was completed on 20 July 2005 and the original owners delisted their companies from the respective exchanges: the Shell Transport & Trading Company plc was delisted from the LSE on 20 July 2005, while the Royal Dutch Petroleum Company was delisted from the New York Stock Exchange on 18 November 2005. The shares of the company were issued at a 60/40 advantage for the shareholders of Royal Dutch, in line with the original ownership of the Shell Group. During the 2009 Iraqi oil services contracts tender, a consortium led by Shell (45%) and including Petronas (30%) was awarded a production contract for the "Majnoon field" in the south of Iraq, which contains an estimated 12.6 billion barrels (2.00×10⁹ m³) of oil. The "West Qurna 1 field" production contract was awarded to a consortium led by ExxonMobil (60%) that included Shell (15%). In February 2010, Shell and Cosan formed a 50:50 joint venture, Raízen, comprising all of Cosan's Brazilian ethanol, energy generation, fuel distribution and sugar activities, and all of Shell's Brazilian retail fuel and aviation distribution businesses. In March 2010, Shell announced the sale of some of its assets, including its liquefied petroleum gas (LPG) business, to meet the cost of a planned $28bn capital spending programme. Shell invited buyers to submit indicative bids, due by 22 March, with a plan to raise $2–3bn from the sale. In June 2010, Shell agreed to acquire all the business of East Resources for a cash consideration of $4.7 billion. The transaction included East Resources' tight gas fields. Over the course of 2013, the corporation began the sale of its US shale gas assets and cancelled a US$20 billion gas project that was to be constructed in the US state of Louisiana. A new CEO, Ben van Beurden, was appointed in January 2014, prior to the announcement that the corporation's overall performance in 2013 was 38 percent lower than in 2012; the value of Shell's shares fell by 3 percent as a result. Following the sale of the majority of its Australian assets in February 2014, the corporation planned to sell a further US$15 billion worth of assets in the period leading up to 2015, with deals announced in Australia, Brazil and Italy. Shell announced on 8 April 2015 that it had agreed to buy BG Group for £47 billion (US$70 billion), subject to shareholder and regulatory approval. The acquisition was completed in February 2016, resulting in Shell surpassing Chevron Corporation to become the world's second-largest non-state oil company. On 7 June 2016, Shell announced that it would build an ethane cracker plant near Pittsburgh, Pennsylvania, after spending several years on an environmental cleanup of the proposed plant's site. In January 2017, Shell agreed to sell £2.46bn worth of North Sea assets to oil exploration firm Chrysaor.
In 2017, Shell sold its oil sands assets to Canadian Natural Resources in exchange for an approximately 8.8% stake in that company. In May 2017, it was reported that Shell planned to sell its shares in Canadian Natural Resources, fully exiting the oil sands business. On 5 November 2017, the Paradise Papers, a set of confidential electronic documents relating to offshore investment, revealed that Argentine Energy Minister Juan José Aranguren had managed the offshore companies 'Shell Western Supply and Trading Limited' and 'Sol Antilles y Guianas Limited', both subsidiaries of Shell. One is the main bidder for the purchase of diesel oil by the government through the state-owned CAMMESA (Compañía Administradora del Mercado Mayorista Eléctrico). On 30 April 2020, Shell announced that it would cut its dividend for the first time since the Second World War, due to the oil price collapse that followed the reduction in oil demand during the COVID-19 pandemic. Shell stated that its net income adjusted for the cost of supply dropped to US$2.9 billion in the three months to 31 March, compared with US$5.3 billion in the same period the previous year. On 30 September 2020, the company said that it would cut up to 9,000 jobs as a result of the economic effects of the pandemic and announced a "broad restructuring". In December 2020, Shell forecast another write-down of $3.5–4.5 billion for the fourth quarter due to lower oil prices, following $16.8 billion of impairments in the second quarter. In February 2021, Shell announced a loss of $21.7 billion in 2020 due to the COVID-19 pandemic, despite reducing its operating expenses by 12%, or $4.5 billion, according to a Morningstar analysis cited by Barron's. In November 2021, Shell announced that it was planning to relocate its headquarters to London, abandon its dual share structure, and change its name from Royal Dutch Shell plc to Shell plc. The company's name change was registered at Companies House on 21 January 2022. In December 2021, Shell pulled out of the Cambo oil field, off the Shetland Islands, claiming that "the economic case for investment in this project is not strong enough at this time, as well as having the potential for delays". The proposed oilfield had been the subject of intense campaigning by environmentalists in the run-up to the COP26 UN climate summit in Glasgow in November 2021. On 4 March 2022, during the Russian invasion of Ukraine and amid the growing boycott of the Russian economy and related divestments, Shell bought a cargo of discounted Russian crude oil. The next day, following criticism from Ukraine's Foreign Minister Dmytro Kuleba, Shell defended the purchase as a short-term necessity, but also announced that it intended to reduce such purchases and would put the profits from any Russian oil it purchased into a fund for humanitarian aid to Ukraine. On 8 March, Shell announced that it would stop buying Russian oil and gas and close its service stations in the country. In 2022, the major oil and gas companies, including Shell, reported sharp rises in interim revenues and profits. For Shell the rise was so sharp that 2022 became the most profitable year in its history, with profits double those of 2021.
Corporate affairs

Management
On 4 August 2005, the board of directors announced the appointment of Jorma Ollila, chairman and CEO of Nokia at the time, to succeed Aad Jacobs as the company's non-executive chairman on 1 June 2006. Ollila was the first Shell chairman to be neither Dutch nor British. Other non-executive directors included Maarten van den Bergh, Wim Kok, Nina Henderson, Lord Kerr, Adelbert van Roxe, and Christine Morin-Postel. On 3 January 2014, Ben van Beurden became CEO of Shell, succeeding Peter Voser, who had held the post since 1 July 2009. Following a career at the corporation in locations such as Australia and Africa, Ann Pickard was appointed executive vice president of the Arctic at Royal Dutch Shell, a role that was publicized in an interview with McKinsey & Company in June 2014. In January 2023, Wael Sawan succeeded Ben van Beurden as CEO.

Name and logo
The name Shell is linked to The "Shell" Transport and Trading Company. In 1833, the founder's father, Marcus Samuel Sr., founded an import business to sell seashells to London collectors. When collecting seashell specimens in the Caspian Sea area in 1892, the younger Samuel realised there was potential in exporting lamp oil from the region and commissioned the world's first purpose-built oil tanker, the Murex (Latin for a type of snail shell), to enter this market; by 1907 the company had a fleet. Although for several decades the company had a refinery at Shell Haven on the Thames, there is no evidence of this having provided the name. The Shell logo is one of the most familiar commercial symbols in the world. It is known as the "pecten" after the sea shell Pecten maximus (the giant scallop), on which its design is based. The yellow and red colours used are thought to relate to the colours of the flag of Spain, as Shell built early service stations in California, previously a Spanish colony. The current revision of the logo was designed by Raymond Loewy in 1971. The slash was removed from the name "Royal Dutch/Shell" in 2005, concurrent with moves to merge the two legally separate companies (Royal Dutch and Shell) into the single legal entity which exists today. On 15 November 2021, Royal Dutch Shell plc announced plans to change its name to Shell plc.

Operations

Business groupings
Shell is organised into four major business groupings:
Upstream – manages the upstream business. It searches for and recovers crude oil and natural gas and operates the upstream and midstream infrastructure necessary to deliver oil and gas to the market. Its activities are organised primarily within geographic units, although some activities are managed across the business or provided through support units.
Integrated Gas and New Energies – manages the liquefaction of natural gas, the conversion of gas to liquids, and low-carbon opportunities.
Downstream – manages Shell's manufacturing, distribution, and marketing activities for oil products and chemicals. Manufacturing and supply includes refining, supply, and the shipping of crude oil.
Projects and technology – manages the delivery of Shell's major projects and provides technical services and technology capability covering both upstream and downstream activities. It is also responsible for providing functional leadership across Shell in the areas of health, safety and environment, and contracting and procurement.

Oil and gas activities
Shell's primary business is the management of a vertically integrated oil company.
The development of technical and commercial expertise in all stages of this vertical integration, from the initial search for oil (exploration) through its harvesting (production), transportation, refining and finally trading and marketing, established the core competencies on which the company was founded. Similar competencies were required for natural gas, which has become one of the most important businesses in which Shell is involved and which contributes a significant proportion of the company's profits. While the vertically integrated business model provided significant economies of scale and barriers to entry, each business now seeks to be a self-supporting unit without subsidies from other parts of the company. Traditionally, Shell was a heavily decentralised business worldwide (especially in the downstream), with companies in over 100 countries, each of which operated with a high degree of independence. The upstream tended to be far more centralised, with much of the technical and financial direction coming from the central offices in The Hague. The upstream oil sector is also commonly known as the "exploration and production" sector. Downstream operations, which now also include the chemicals business, generate the majority of Shell's profits worldwide, and the company is known for its global network of more than 40,000 petrol stations and its various oil refineries. The downstream business, which in some countries also included oil refining, generally comprised a retail petrol station network, lubricants manufacture and marketing, industrial fuel and lubricants sales, and a host of other product/market sectors such as LPG and bitumen. The practice in Shell was that these businesses were essentially local and best managed by local "operating companies", often with middle and senior management reinforced by expatriates.

Sponsorships
Shell has a long history of motorsport sponsorship, most notably Scuderia Ferrari (1951–1964, 1966–1973 and 1996–present), BRM (1962–1966 and 1968–1972), Scuderia Toro Rosso (2007–2013 and 2016), McLaren (1967–1968 and 1984–1994), Lotus (1968–1971), Ducati Corse (since 1999), Team Penske (2011–present), Hyundai Motorsport (since 2005), AF Corse, Risi Competizione, BMW Motorsport (2015–present, also with Pennzoil) and Dick Johnson Racing (1987–2004 and 2017–present). Starting in 2023, Shell will become the official fuel of the IndyCar Series, supplying E100 race fuel for all teams.

Operations by region

Arctic

Kulluk oil rig
Following the purchase of an offshore lease in 2005, Shell initiated its US$4.5 billion Arctic drilling program in 2006, after the corporation purchased the Kulluk oil rig and leased the Noble Discoverer drillship. At inception, the project was led by Pete Slaiby, a Shell executive who had previously worked in the North Sea. However, after the purchase of a second offshore lease in 2008, Shell only commenced drilling work in 2012, due to the refurbishment of rigs, permit delays from the relevant authorities, and lawsuits.
The plans to drill in the Arctic led to protests from environmental groups, particularly Greenpeace; analysts in the energy field and related industries also expressed skepticism, perceiving drilling in the region as "too dangerous because of harsh conditions and remote locations". Further problems hampered the Arctic project after the commencement of drilling in 2012, as Shell dealt with a series of issues involving air permits, Coast Guard certification of a marine vessel, and severe damage to essential oil-spill equipment. Additionally, difficult weather conditions delayed drilling during mid-2012, and the already dire situation was exacerbated by the Kulluk incident at the end of the year. Shell had invested nearly US$5 billion by this stage of the project. As the Kulluk oil rig was being towed to the American state of Washington to be serviced in preparation for the 2013 drilling season, a winter storm on 27 December 2012 caused the towing crews, as well as the rescue service, to lose control of the rig. As of 1 January 2013, the Kulluk was grounded off the coast of Sitkalidak Island, near the eastern end of Kodiak Island. Following the accident, Fortune magazine contacted Larry McKinney, the executive director of the Harte Research Institute for Gulf of Mexico Studies at Texas A&M, who explained that "A two-month delay in the Arctic is not a two-month delay ... A two-month delay could wipe out the entire drilling season." It was unclear whether Shell would recommence drilling in mid-2013 following the Kulluk incident, and in February 2013 the corporation stated that it would "pause" its closely watched drilling project off the Alaskan coast in 2013 and would instead prepare for future exploration. In January 2014, the corporation announced the extension of the suspension of its drilling program in the Arctic, with chief executive van Beurden explaining that the project was "under review" due to both market and internal issues. A June 2014 interview with Pickard indicated that, following a forensic analysis of the problems encountered in 2012, Shell would continue with the project, and Pickard stated that she saw the future of the corporation's activity in the Arctic region as a long-term "marathon". Pickard stated that the forensic "look back" revealed "there was an on/off switch" and further explained: In other words, don't spend the money unless you're sure you're going to have the legal environment to go forward. Don't spend the money unless you're sure you're going to have the permit. No, I can't tell you that I'm going to have that permit until June, but we need to plan like we're going to have that permit in June. And so probably the biggest lesson is to make sure we could smooth out the on/off switches wherever we could and take control of our own destiny. Based upon the interview with Pickard, Shell was approaching the project as an investment that would yield energy resources with a lifespan of around 30 years. According to a 2015 Bureau of Ocean Energy Management report, the chance of a major spill from deep-sea Arctic drilling is 75% before the century's end.

Kodiak Island
In 2010, Greenpeace activists painted "No Arctic Drilling" using spilled BP oil on the side of a ship in the Gulf of Mexico that was en route to explore for Arctic oil for Shell.
At the protest, Phil Radford of Greenpeace called for "President Obama [to] ban all offshore oil drilling and call for an end to the use of oil in our cars by 2030." On 16 March 2012, 52 Greenpeace activists from five different countries boarded Fennica and Nordica, multipurpose icebreakers chartered to support Shell's drilling rigs near Alaska. Around the same time, a reporter for Fortune magazine spoke with Edward Itta, an Inupiat leader and the former mayor of the North Slope Borough, who said he was conflicted about Shell's plans in the Arctic: he was concerned that an oil spill could destroy the Inupiat people's hunting-and-fishing culture, but his borough also received major tax revenue from oil and gas production, and further revenue from energy activity was considered crucial to the future living standard of his community. In July 2012, Greenpeace activists shut down 53 Shell petrol stations in Edinburgh and London in a protest against the company's plans to drill for oil in the Arctic. Greenpeace's "Save the Arctic" campaign aims to prevent oil drilling and industrial fishing in the Arctic by declaring the uninhabited area around the North Pole a global sanctuary. A review was announced after the Kulluk oil rig ran aground near Kodiak Island in December 2012. Shell filed lawsuits to seek injunctions against possible protests, and Benjamin Jealous of the NAACP and Radford argued that the legal action was "trampling Americans' rights". According to Greenpeace, Shell lodged a request with Google to take down video footage of a Greenpeace protest action at the Shell-sponsored Formula One (F1) Belgian Grand Prix on 25 August 2013, in which "SaveTheArctic.org" banners appeared at the winners' podium ceremony. In the video, the banners rise up automatically (activists controlled their appearance using four radio car antennas), revealing the website URL alongside an image consisting of half of a polar bear's head and half of the Shell logo. Shell then announced a "pause" in the timeline of the project in early 2013 and, in September 2015, the corporation announced the extension of the suspension of its drilling program in the Arctic.

Polar Pioneer rig
A June 2014 interview with the corporation's new executive vice president of the Arctic indicated that Shell would continue with its activity in the region. In Seattle, protests began in May 2015 in response to the news that the Port of Seattle had made an agreement with Shell to berth rigs at the Port's Terminal 5 during the off-season of oil exploration in Alaskan waters. The arrival of Shell's new Arctic drilling vessel, Polar Pioneer (IMO number: 8754140), a semi-submersible offshore drilling rig, was greeted by large numbers of environmental protesters paddling kayaks in Elliott Bay. On 6 May 2015, it was reported that during a Coast Guard inspection of Polar Pioneer, a piece of anti-pollution gear had failed, resulting in fines and a delay of the operation. Oil executives from Total and Eni interviewed by the New York Times expressed scepticism about Shell's new ambitions for offshore drilling in the Arctic, citing economic and environmental hurdles. ConocoPhillips and Equinor (formerly Statoil) had suspended Arctic drilling earlier, after Shell's failed attempt in 2012.
Australia
On 20 May 2011, Shell made its final investment decision on the world's first floating liquefied natural gas (FLNG) facility, following the 2007 discovery of the remote offshore Prelude field, located off Australia's northwestern coast and estimated to contain about 3 trillion cubic feet of natural gas equivalent reserves. FLNG technology is based on liquefied natural gas (LNG) developments pioneered in the mid-20th century and facilitates the exploitation of untapped natural gas reserves located in remote areas, often too small to extract any other way. The floating vessel to be used for the Prelude field, known as Prelude FLNG, is promoted as the longest floating structure in the world; moored 200 km (125 miles) off the coast of Western Australia, it will take in natural gas equivalent to 110,000 barrels of oil per day and cool it into liquefied natural gas for transport and sale in Asia. The Prelude was expected to start producing LNG in 2017, with analysts estimating the total cost of construction at more than US$12 billion. Following Shell's decision in April 2013 to close its Geelong Oil Refinery in Australia, a third consecutive annual loss was recorded for Shell's Australian refining and fuel marketing assets. Revealed in June 2013, this writedown was worth A$203 million and was preceded by a A$638m writedown in 2012 and a A$407m writedown in 2011, after the closure of the Clyde Refinery in Sydney, Australia. In February 2014, Shell sold its Australian refinery and petrol stations for US$2.6 billion (A$2.9 billion) to the Swiss company Vitol. At the time of the downstream sale to Vitol, Shell was expected to continue investing in Australian upstream projects, including those involving Chevron Corp., Woodside Petroleum and Prelude. In June 2014, Shell sold 9.5% of its 23.1% stake in Woodside Petroleum and advised that it had reached an agreement for Woodside to buy back 9.5% of its shares at a later stage. Shell became a major shareholder in Woodside after a 2001 takeover attempt was blocked by then federal Treasurer Peter Costello, and the corporation has been open about its intention to sell its stake in Woodside as part of its target to shed assets. At a general body meeting held on 1 August 2014, 72 percent of shareholders voted to approve the buy-back, short of the 75 percent vote required for approval. A statement from Shell read: "Royal Dutch Shell acknowledges the outcome of Woodside Petroleum Limited's shareholders' negative vote on the selective buy-back proposal. Shell is reviewing its options in relation to its remaining 13.6 percent holding."

Brunei
Brunei Shell Petroleum (BSP) is a joint venture between the Government of Brunei and Shell. The British Malayan Petroleum Company (BMPC), owned by Royal Dutch Shell, first found commercial amounts of oil in 1929. BSP currently produces 350,000 barrels of oil and gas equivalent per day and is the largest oil and gas company in Brunei, a sector which contributes 90% of government revenue. In 1954, the BMPC in Seria had a total of 1,277 European and Asian staff.

China
The company has upstream operations in unconventional oil and gas in China. Shell has a joint venture with PetroChina at the Changbei tight gas field in Shaanxi, which has produced natural gas since 2008. The company has also invested in exploring for shale oil in Sichuan; shale was the other unconventional resource in which Shell invested in China.
The company was an early entrant in shale oil exploration in China but scaled down operations in 2014 due to difficulties with geology and population density. It has a joint venture with Jilin Guangzheng Mineral Development Company Limited to explore for oil shale in Jilin.

Hong Kong
Shell has been active in Hong Kong for a century, providing retail, LPG, commercial fuel, lubricants, bitumen, aviation, marine and chemicals products and services. Shell also sponsored the first Hong Kong-built aircraft, Inspiration, for its around-the-world trip.

India
Shell India has inaugurated its new lubricants laboratory at its Technology Centre in Bangalore.

Ireland
Shell first started trading in Ireland in 1902. Shell E&P Ireland (SEPIL) (previously Enterprise Energy Ireland) is an Irish exploration and production subsidiary of Royal Dutch Shell, headquartered on Leeson Street in Dublin. It was acquired in May 2002. Its main project is the Corrib gas project, a large gas field off the northwest coast, over which Shell has encountered controversy and protests relating to the onshore pipeline and licence terms. In 2005, Shell disposed of its entire retail and commercial fuels business in Ireland to Topaz Energy Group. This included depots, company-owned petrol stations and supply agreements with stations throughout the island of Ireland. The retail outlets were re-branded as Topaz in 2008/9. The Topaz fuel network was subsequently acquired in 2015 by Couche-Tard, and these stations began re-branding to Circle K in 2018.

Malaysia
Shell drilled the first oil well in Borneo in 1910, in Miri, Sarawak. Today, the oil well is a state monument known as the Grand Old Lady. In 1914, following this discovery, Shell built Borneo's first oil refinery and laid a submarine pipeline in Miri.

Nigeria
Shell began production in Nigeria in 1958. In Nigeria, Shell told US diplomats that it had placed staff in all the main ministries of the government. Shell nevertheless continues upstream activities, extracting crude oil in the oil-rich Niger Delta, as well as downstream commercial activities in South Africa. In June 2013, the company announced a strategic review of its operations in Nigeria, hinting that assets could be divested. In August 2014, the company disclosed that it was finalizing the sale of its interests in four Nigerian oil fields. On 29 January 2021 a Dutch court ruled that Shell was responsible for multiple oil leaks in Nigeria. The actions of companies like Shell have led to extreme environmental problems in the Niger Delta. Many pipelines in the Niger Delta owned by Shell are old and corroded. Shell has acknowledged its responsibility for maintaining the pipelines but has denied responsibility for the resulting environmental damage. The heavy contamination of the air, ground and water with toxic pollutants by the oil industry in the Niger Delta is often cited as an example of ecocide. This has led to mass protests against Shell by Niger Delta inhabitants, Amnesty International, and Friends of the Earth Netherlands, as well as plans by environmental and human rights groups to boycott Shell. In January 2013, a Dutch court rejected four out of five allegations brought against the firm over oil pollution in the Niger Delta but found a subsidiary guilty in one case of pollution, ordering compensation to be paid to a Nigerian farmer.
Nordic countries
On 27 August 2007, Shell and Reitan Group, the owner of the 7-Eleven brand in Scandinavia, announced an agreement to re-brand some 269 service stations across Norway, Sweden, Finland and Denmark, subject to obtaining regulatory approvals under the different competition laws in each country. In April 2010, Shell announced that it was seeking a buyer for all of its operations in Finland and was conducting similar market research concerning its Swedish operations. In October 2010, Shell's petrol stations and heavy-vehicle fuel supply networks in Finland and Sweden, along with a refinery located in Gothenburg, Sweden, were sold to St1, a Finnish energy company (more precisely, to its major shareholding parent company Keele Oy).

North America
Through most of Shell's early history, the Shell USA business in the United States was substantially independent. Its stock was traded on the NYSE, and the group's central office had little direct involvement in running the operation. However, in 1984, Shell made a bid to purchase the shares of Shell Oil Company it did not own (around 30%) and, despite opposition from some minority shareholders, which led to a court case, Shell completed the buyout for a sum of $5.7 billion.

Philippines
Royal Dutch Shell operates in the Philippines under its subsidiary, Pilipinas Shell Petroleum Corporation (PSPC). Its headquarters are in Makati and it has facilities in the Pandacan oil depot and other key locations. In January 2010, the Bureau of Customs claimed 7.34 billion pesos worth of unpaid excise taxes against Pilipinas Shell for importing catalytic cracked gasoline (CCG) and light catalytic cracked gasoline (LCCG), stating that those imports were subject to tariff charges. In August 2016, Pilipinas Shell filed a registration statement with the SEC to sell US$629 million worth of primary and secondary shares to the investing public, a prelude to filing its IPO listing application with the Philippine Stock Exchange. On 3 November 2016, Pilipinas Shell Petroleum Corporation was officially listed on the Philippine Stock Exchange under the ticker symbol SHLPH, after holding its initial public offering from 19 to 25 October of the same year. Due to the economic slowdown caused by the COVID-19 pandemic across the global, regional and local economies, continually low refining margins, and competition from imported refined products, the management of Pilipinas Shell announced in August 2020 that the 110,000 bbl/d refinery in Tabangao, Batangas, which had started operations in 1962, would be shut down permanently and turned into an import terminal instead.

Russia
In February 2022, Shell exited all its joint ventures with Gazprom because of the 2022 Russian invasion of Ukraine and, in March 2022, Shell announced that it would stop buying oil from Russia and close all its service stations there. In April 2022, it emerged that Shell was to book up to $5 billion in impairment charges from exiting its interests in Russia.

Singapore
Singapore is the main centre for Shell's petrochemical operations in the Asia-Pacific region. Shell Eastern Petroleum Limited (SEPL) has its refinery on Singapore's Pulau Bukom island and also operates as Shell Chemicals Seraya on Jurong Island. In November 2020, Shell announced that, as part of efforts to curtail pollution emissions, it would cut its oil-processing capacity in Singapore.
United Kingdom
In the UK sector of the North Sea, Shell employs around 4,500 staff in Scotland as well as an additional 1,000 service contractors; in August 2014, however, it announced it was laying off 250 of them, mainly in Aberdeen. Shell paid no UK taxes on its North Sea operations over the period 2018 to 2021.

Alternative energy
In the early 2000s Shell moved into alternative energy, and there is now an embryonic "Renewables" business that has made investments in solar power, wind power, hydrogen, and forestry. The forestry business went the way of nuclear, coal, metals and electricity generation, and was disposed of in 2003. In 2006 Shell paid SolarWorld to take over its entire solar business, and in 2008 the company withdrew from the London Array, which when built was the world's largest offshore wind farm. Shell is also involved in large-scale hydrogen projects. HydrogenForecast.com describes Shell's approach thus far as consisting of "baby steps", but with an underlying message of "extreme optimism". In 2015, the company announced plans to install hydrogen fuel pumps across Germany, planning to have 400 locations in operation by 2023. Shell holds 44% of Raízen, a joint venture with Brazilian sugarcane producer Cosan, which is the third-largest Brazil-based energy company by revenues and a major producer of ethanol. In 2015, the company partnered with Brazilian start-up Insolar to install solar panels in Rio de Janeiro to deliver electricity to the Santa Marta neighbourhood. Shell is the operator and major shareholder of the Quest carbon capture and storage project, based within the Athabasca Oil Sands Project near Fort McMurray, Alberta. It holds a 60% share, alongside Chevron Canada Limited, which holds 20%, and Marathon Canadian Oil Sands Holding Limited, which holds the final 20%. Commercial operations launched in November 2015; it was the world's first commercial-scale oil sands carbon capture and storage (CCS) project and is expected to reduce CO2 emissions in Canada by 1.08 million tonnes per year. In December 2016, Shell won the auction for the 700 MW Borssele III & IV offshore wind farms at a price of 5.45 c/kWh, beating 6 other consortia. In June 2018, it was announced that the company and its co-investor Partners Group had secured $1.5bn for the project, which also involves Eneco, Van Oord, and Mitsubishi/DGE. In October 2017, it bought Europe's biggest vehicle charging network, NewMotion. In November 2017, Shell's CEO Ben van Beurden announced Shell's plan to cut its carbon emissions by half by 2050 and by 20 percent by 2035. In this regard, Shell promised to spend $2 billion annually on renewable energy sources. Shell began to develop its wind energy segment in 2001; the company now operates six wind farms in the United States and is part of a plan to build two offshore wind farms in the Netherlands. In December 2017, the company announced plans to buy UK household energy and broadband provider First Utility. In March 2019 it rebranded to Shell Energy and announced that all electricity would be supplied from renewable sources. In December 2018, the company announced that it had partnered with SkyNRG to begin supplying sustainable aviation fuel to airlines operating out of San Francisco Airport (SFO), including KLM, SAS, and Finnair.
In the same month, the company announced plans to double its budget for investment in low-carbon energy to US$4 billion each year, with an aim to spend up to US$2 billion on renewable energy by 2021. In January 2018, the company acquired a 44% interest in Silicon Ranch, a solar energy company run by Matt Kisber, as part of its global New Energies project. The company took over from Partners Group, paying up to an estimated $217 million for the minority interest. In February 2019, the company acquired the German solar battery company Sonnen, in which it had first invested in May 2018 as part of its New Energies project. As of late 2021, Sonnen had 800 employees and had installed 70,000 home battery systems. On 27 February 2019, the company acquired the British VPP operator Limejump for an undisclosed amount. In July 2019, Shell installed its first 150 kW electric car chargers at its London petrol stations, with payments handled via SMOOV; it also plans to provide 350 kW chargers in Europe through an agreement with IONITY. On 26 January 2021, Shell said it would buy 100 per cent of Ubitricity, owner of the largest public charging network for electric vehicles in the United Kingdom, as the company expands its presence along the power supply chain. On 25 February 2021, Shell announced the acquisition of the German virtual power plant (VPP) company Next Kraftwerke for an undisclosed amount. Next Kraftwerke connects renewable electricity generation and storage projects to optimize the use of those assets, and operates mostly in Europe. In November 2022, it was announced that Shell's wholly owned subsidiary Shell Petroleum NV had acquired the Odense-headquartered renewable natural gas producer Nature Energy Biogas A/S for nearly US$2 billion.

Controversies

General issues
Shell's public rhetoric and pledges emphasize that the company is shifting towards climate-friendly, low-carbon and transition strategies. However, a 2022 study found that the company's spending on clean energy was insignificant and opaque, with little to suggest that its discourse matched its actions. In 1989, Shell redesigned a $3-billion natural gas platform in the North Sea, raising its height by one to two meters to accommodate an anticipated sea level rise due to global warming. In 2013, Royal Dutch Shell plc reported CO2 emissions of 81 million metric tonnes. In 2017, Shell sold non-compliant foreign fuel to consumers. In 2020, the Northern Lights CCS project was announced, a joint project between Equinor, Shell and Total, operating in the European Union and Norway and aiming to store liquid CO2 beneath the seabed. Environmentalists have expressed concern that Shell is processing oil from the Amazon region of South America. In the United States, the Martinez refinery (CA) and the Puget Sound Refinery (WA) process Amazonian oil; in 2015, 14% of the Martinez refinery's gross input, at 19,570 barrels per day, came from the Amazon. In 2021, Shell was ranked as the 10th most environmentally responsible company out of 120 oil, gas, and mining companies involved in resource extraction north of the Arctic Circle in the Arctic Environmental Responsibility Index (AERI). In December 2021, Royal Dutch Shell decided to move ahead with seismic tests to explore for oil in humpback whale breeding grounds along South Africa's eastern coastline.
On 3 December 2021, a South African high court dismissed an urgent application brought by environmentalists to stop the project, which will involve a vessel regularly firing an air gun that produces a very powerful shock wave underwater to help map subsea geology. According to Greenpeace Africa and the South African Deep Sea Angling Association, this could cause "irreparable harm" to the marine environment, especially to migrating humpback whales in the area.

Climate change
In 2017, a public information film ("Climate of Concern"), unseen for years, resurfaced and showed that Shell had a clear grasp of global warming 26 years earlier; critics said it had not acted accordingly since. The burning of the fossil fuels produced by Shell is responsible for 1.67% of global industrial greenhouse gas emissions from 1988 to 2015. In April 2020, Shell announced plans to achieve net zero greenhouse gas emissions by 2050 or sooner. However, internal documents from the company released by the Democratic-led House committee revealed a private 2020 communication saying Shell did not have any plans to bring emissions to zero for the next 10–20 years. Measured by both its own emissions and the emissions of all the fossil fuels it sells, Shell was the ninth-largest corporate producer of greenhouse gas emissions in the period 1988–2015.

Climate case
On 5 April 2019, Milieudefensie (Dutch for "environmental defense"), together with six NGOs and more than 17,000 citizens, sued Shell, accusing the company of harming the climate despite knowing about global warming since 1986. In May 2021, the district court of The Hague ruled that Shell must reduce carbon dioxide emissions by 45% by 2030 (compared to 2019 levels).

Oil spills
Shell was responsible for around 21,000 gallons of oil spilled near Tracy, California, in May 2016 due to a pipeline crack, and for an 88,200-gallon oil spill in the Gulf of Mexico in the same month. Two ruptures in a Shell Oil Co. pipeline in Altamont, California, one in September 2015 and another in May 2016, led to questions about whether the Office of the State Fire Marshal, charged with overseeing the pipeline, was doing an adequate job. On 29 January 2021, a Dutch court ordered Royal Dutch Shell plc's Nigerian unit to pay compensation for oil spills in two villages more than 13 years earlier. Shell Nigeria is liable for damages from pipeline leaks in the villages of Oruma and Goi, the Hague Court of Appeals said in its ruling. Shell said that it should not be liable, as the spills were the result of sabotage.

Accusations of greenwashing
On 2 September 2002, Shell Chairman Philip Watts accepted the "Greenwash Lifetime Achievement Award" at the Greenwash Academy Awards, held near the World Summit on Sustainable Development. In 2007, the British ASA ruled against a Shell ad involving chimneys spewing flowers, which depicted Shell's waste management policies, finding that it misled the public about Shell's environmental impact. In 2008, the British ASA ruled that Shell had misled the public in an advertisement when it claimed that a $10 billion oil sands project in Alberta, Canada, was a "sustainable energy source". In 2021, Netherlands officials told Shell to stop running a campaign which claimed customers could turn their fuel "carbon neutral" by buying offsets, as the claim was found to be devoid of evidence. In December 2022, U.S. House Oversight and Reform Committee Chair Carolyn Maloney and U.S.
House Oversight Environment Subcommittee Chair Ro Khanna sent a memorandum to all House Oversight and Reform Committee members summarizing additional findings from the Committee's investigation into the fossil fuel industry's disinformation campaign to obscure the role of fossil fuels in causing global warming. Upon reviewing internal company documents, the memorandum accused Shell, along with BP, Chevron Corporation, and ExxonMobil, of greenwashing their Paris Agreement carbon neutrality pledges while continuing long-term investment in fossil fuel production and sales; of engaging in a campaign to promote the use of natural gas as a clean energy source and bridge fuel to renewable energy; of intimidating journalists reporting about the companies' climate actions; and of obstructing the Committee's investigation. ExxonMobil, Shell, and the American Petroleum Institute denied the accusations.

Health and safety
A number of incidents over the years led to criticism of Shell's health and safety record, including repeated warnings by the UK Health and Safety Executive about the poor state of the company's North Sea platforms.

Reaction to the War in Ukraine
Shell already had experience exiting markets subject to sanctions pressure from NATO or EU member states; in 2013, for instance, it announced that it was suspending its operations in Syria. On 8 March 2022, Shell announced its intention to phase out all Russian hydrocarbon production and acquisition projects, including crude oil, petroleum products, natural gas and liquefied natural gas (LNG). In early 2022, the company was criticized by the Minister of Foreign Affairs of Ukraine for its slow response to the war in Ukraine. As of April 2023, Shell still had shares in Russian companies, such as a 27.5% stake in Sakhalin Energy Investment Company (SEIC), a joint venture with Gazprom (50%), Mitsui (12.5%) and Mitsubishi (10%).

royaldutchshellplc.com
This domain name was first registered by Alfred Donovan, a former marketing manager for Royal Dutch Shell plc, and has been used as a "gripe site". It avoids being an illegal cybersquatter as long as it is non-commercial, active, and no attempt is made to sell the domain name, as determined by WIPO proceedings. In 2005, Donovan said he would relinquish the site to Shell after it "gets rid of all the management he deems responsible for its various recent woes." The site has been recognized by several media outlets for its role as an Internet leak. In 2008 the Financial Times published an article based on a letter published by royaldutchshellplc.com, which Reuters and The Times also covered shortly thereafter. On 18 October 2006, the site published an article stating that Shell had for some time been supplying information to the Russian government relating to Sakhalin II. The Russian energy company Gazprom subsequently obtained a 50% stake in the Sakhalin-II project. Other instances where the site has acted as an Internet leak include a 2007 IT outsourcing plan, as well as a 2008 internal memo in which CEO Jeroen van der Veer expressed disappointment in the company's share-price performance. The gripe site has also been recognized as a source of information regarding Shell by several news sources. In the 2006 Fortune Global 500 rankings, in which Royal Dutch Shell placed third, royaldutchshellplc.com was listed alongside shell.com as a source of information. In 2007 the site was described as "a hub for activists and disgruntled former employees."
A 2009 article called royaldutchshellplc.com "the world's most effective adversarial Web site." The site has been described as "an open wound for Shell." External links Official website Shell plc companies grouped at OpenCorporates Works by Shell Union Oil Corporation at Project Gutenberg Works by or about Shell plc at Internet Archive Works by Shell plc at LibriVox (public domain audiobooks) Documents and clippings about Shell plc in the 20th Century Press Archives of the ZBW
regional greenhouse gas initiative
The Regional Greenhouse Gas Initiative (RGGI, pronounced "Reggie") is the first mandatory market-based program in the United States to reduce greenhouse gas emissions. RGGI is a cooperative effort among the states of Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Rhode Island, Vermont, and Virginia to cap and reduce carbon dioxide (CO2) emissions from the power sector. RGGI compliance obligations apply to fossil-fueled power plants 25 megawatts (MW) and larger within the 11-state region. Pennsylvania's participation in the RGGI cooperative was ruled unconstitutional on November 1, 2023. North Carolina's entrance into RGGI has been blocked by the enactment of the state's fiscal year 2023–25 budget. RGGI establishes a regional cap on the amount of CO2 pollution that power plants can emit by issuing a limited number of tradable CO2 allowances. Each allowance represents an authorization for a regulated power plant to emit one short ton of CO2. Individual CO2 budget trading programs in each RGGI state together create a regional market for CO2 allowances. The RGGI states distribute over 90 percent of allowances through quarterly auctions. These allowance auctions generate proceeds, which participating states are able to invest in strategic energy and consumer benefit programs. Programs funded through RGGI have included energy efficiency, clean and renewable energy, greenhouse gas abatement, and direct bill assistance. An initial milestone in the program's development occurred in 2005, when seven states signed a memorandum of understanding announcing an agreement to implement RGGI. The RGGI states then established individual CO2 budget trading programs, based on the RGGI Model Rule. The first pre-compliance RGGI auction took place in September 2008, and the program became effective on January 1, 2009. The RGGI program is currently in its fifth three-year compliance period, which began January 1, 2021. Track record and benefits RGGI states have reduced their carbon emissions while still experiencing economic growth. Power sector carbon emissions in the RGGI states have declined by over 50% since the program began. Media have reported on RGGI's success as a nationally relevant example showing that economic growth can coincide with pollution reductions. In a report on RGGI, the Congressional Research Service has also said that "experiences in RGGI may be instructive for policymakers seeking to craft a national program." While multiple factors contribute to emissions trends, a 2015 peer-reviewed study found that RGGI contributed significantly to the decline in emissions in the nine-state region. Alternative factors considered by the study included state Renewable Portfolio Standard (RPS) programs, economic trends, and natural gas prices. Other independent reports have analyzed RGGI's economic impact. For example, two reports by the Analysis Group studied RGGI's first and second three-year compliance periods; they found that RGGI's first three years generated $1.6 billion in net economic benefit and 16,000 job-years, and its second three years generated $1.3 billion in net economic benefit and 14,700 job-years. These figures do not include co-benefits such as public health improvements or avoided climate change impacts. A Clean Air Task Force (CATF) study investigated public health benefits arising from the RGGI states' shift to cleaner power generation.
The study found that the RGGI states' transition to cleaner energy is saving hundreds of lives, preventing thousands of asthma attacks, and reducing medical impacts and expenses by billions of dollars. Projected Benefits of RGGI in Pennsylvania Environmental RGGI has the potential to lower Pennsylvania's emissions of many pollutants dramatically: carbon pollution would be reduced by between 97 and 227 million tons, nitrogen oxide emissions by about 112,000 tons, and sulfur dioxide pollution by 67,000 tons. Health The reduction in state air pollution can potentially result in significant health benefits through 2030, including 639 fewer premature deaths and 30,000 fewer respiratory-related hospital visits. Economic The adoption of RGGI also has the potential to provide economic benefits through an increase in jobs and personal income, and $2 billion in gross state product through 2030. RGGI Market The highs and lows of the RGGI market can be largely attributed to declining emissions and allowance oversupply, price controls, policy intervention, and the Obama administration's Clean Power Plan of 2015. Like any other emissions trading program, RGGI has faced obstacles; one of these is an oversupplied market. RGGI's oversupply can be traced back to the transition from coal to natural gas as well as a weak economy at the time of implementation, and because RGGI has a low price floor, there is no scarcity of allowances. Under cap-and-trade, allowances are distributed to limit harmful emissions and catalyze pollution cuts; such systems have spread globally as countries set more ambitious climate goals, and many participating countries are seeing downward trends in emissions. Key Trends RGGI states have witnessed positive economic activity and a decrease in emissions, electricity prices, and coal generation. According to RGGI's 2018 Electricity Monitoring report, carbon dioxide emissions decreased by 48.3 percent between the 2006–2008 and 2016–2018 periods. Electricity generation from coal has decreased significantly since the inception of RGGI, while natural gas and renewable generation have increased. RGGI also exerts downward pressure on wholesale electricity prices through investments in state-level energy efficiency programs. These programs, along with other associated RGGI measures, have helped provide positive economic impacts in the RGGI region—the RGGI program provided $1.4 billion in net positive economic activity between 2015 and 2017. RGGI caps The RGGI CO2 cap is the regional cap on power sector emissions. The RGGI states included two interim adjustments to the cap to account for banked CO2 allowances. The cap declined 2.5 percent each year until 2020; the caps and adjusted caps decreased annually from 2014 to 2020, except in 2020, owing to the addition of New Jersey. In 2017, the participating states agreed to further reductions in the regional cap for 2021–2030, specifying a 30 percent reduction from 2020 to 2030. The RGGI states also established a Cost Containment Reserve (CCR) of CO2 allowances that creates a fixed additional supply of CO2 allowances that are only available for sale if CO2 allowance prices exceed certain price levels—$13.00 in 2021.
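A minimal sketch of how such price-triggered reserves adjust the number of allowances offered at auction (the $13.00 ceiling is the 2021 CCR trigger; the $6.00 floor is the 2021 trigger of the Emissions Containment Reserve described below; the reserve sizes here are hypothetical, not RGGI's actual figures):

    def adjust_auction_supply(base_supply, expected_price,
                              ccr_trigger=13.00, ccr_size=5_000_000,
                              ecr_trigger=6.00, ecr_size=5_000_000):
        # Cost Containment Reserve: release extra allowances to damp high prices.
        if expected_price >= ccr_trigger:
            return base_supply + ccr_size
        # Emissions Containment Reserve: withhold allowances to support low prices.
        if expected_price <= ecr_trigger:
            return base_supply - ecr_size
        return base_supply

The two reserves thus bound allowance prices from above and below without fixing them outright, which is the design intent described in the next sentence.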
In contrast to the CCR, there is an Emissions Containment Reserve (ECR), which serves as a floor and triggers a reduction in the supply of allowances if prices drop below the trigger price—$6.00 in 2021. The inclusion of the CCR and ECR is intended to keep emission reduction costs reasonable. Compliance RGGI compliance obligations apply to fossil-fueled power plants 25 MW and larger within the RGGI region; as of 2021, there were 203 such covered sources. Under RGGI, sources are required to possess CO2 allowances equal to their CO2 emissions over a three-year control period. A CO2 allowance represents a limited authorization to emit one ton of CO2. Control periods have run in consecutive three-year terms since the first took effect on January 1, 2009; the fifth control period began on January 1, 2021, and extends through December 31, 2023. As of April 2021, 97.5 percent of regulated power plants had met their compliance obligations for the fourth control period. Quarterly regional auctions The first pre-compliance auction of RGGI CO2 allowances took place in September 2008. Regional auctions are held on a quarterly basis and are conducted using a sealed-bid, uniform-price format. Since 2008, the RGGI states have held 54 auctions, generating over $4.7 billion in proceeds; auction clearing prices have ranged from $1.86 to $13. Any party can participate in the RGGI CO2 allowance auctions, provided they meet qualification requirements, including provision of financial security. Auction rules limit the number of CO2 allowances that associated entities may purchase in a single auction to 25 percent of the CO2 allowances offered for sale in that auction. The RGGI auctions are monitored by an independent market monitor, Potomac Economics, which monitors the RGGI allowance market in order to protect and foster competition and to increase the confidence of participants and the public in the allowance market. The independent market monitor has found no evidence of anti-competitive conduct, and no material concerns regarding the auction process, barriers to participation in the auctions, competitiveness of the auction results, or the competitiveness of the secondary market for RGGI CO2 allowances. Market participants can also obtain CO2 allowances in secondary markets, such as the Intercontinental Exchange (ICE), or in over-the-counter transactions. The independent market monitor provides quarterly reports on the secondary market for RGGI allowances.
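A minimal sketch of how a sealed-bid, uniform-price auction of this kind clears (the bids, supply, and reserve price are hypothetical, and this simplifies the actual RGGI price-setting rules, which also add qualification requirements and the 25 percent purchase limit):

    def clear_uniform_price_auction(bids, supply, reserve_price):
        # bids: list of (price, quantity) pairs; every winning bidder pays the
        # same clearing price, set here by the lowest accepted bid.
        sold = 0
        clearing_price = reserve_price
        for price, quantity in sorted(bids, reverse=True):
            if price < reserve_price or sold >= supply:
                break
            sold += min(quantity, supply - sold)
            clearing_price = price
        return clearing_price, sold

    # Hypothetical example: three bidders, 1,000 allowances offered.
    print(clear_uniform_price_auction(
        [(13.50, 400), (12.00, 500), (5.00, 600)],
        supply=1000, reserve_price=6.00))
    # -> (12.0, 900): both accepted bidders pay $12.00; the $5.00 bid falls
    #    below the reserve price and goes unfilled.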
Investment of auction proceeds The RGGI states have discretion over how they invest RGGI auction proceeds and have reinvested them in a wide variety of programs. Programs funded through RGGI investment in energy efficiency, renewable energy, direct bill assistance, and greenhouse gas abatement have benefited more than 3.7 million participating households and 17,800 participating businesses. These investments have saved participants money on their energy bills, created jobs, and reduced pollution. In the period 2008 to 2014, programs funded by RGGI investments avoided the use of 2.4 TWh of electricity, 1.6 TWh (5.3×10¹² British thermal units) of fossil fuel, and the release of 1.7×10⁶ short tons (1.5×10⁶ tonnes) of carbon dioxide. Over their lifetime, programs funded by RGGI investments are estimated to avoid the use of 20.6 TWh of electricity, 22.3 TWh (76.1×10¹² British thermal units) of fossil fuel, and the release of 15.4×10⁶ short tons (1.40×10⁷ tonnes) of carbon dioxide. Energy efficiency represents a large portion of RGGI investments. Ultimately, all electricity consumers, not only those who make upgrades, benefit from energy efficiency programs. For example, investing in efficiency programs - such as weatherizing houses - reduces the amount of electricity used, and the resulting decrease in demand reduces the overall price of electricity. That means costs go down for everyone, not just those who installed new, efficient windows. Program review The RGGI participating states have committed to comprehensive, periodic program reviews to consider program successes, impacts, and design elements. The RGGI states are currently undergoing a 2021 Program Review, which includes technical analyses and regularly scheduled public stakeholder meetings to solicit input; the review is expected to be completed in early 2023. The 2012 and 2016 RGGI Program Reviews, completed in 2013 and 2017 respectively, resulted in several updates to the program. The 2012 Review led to a 45 percent reduction in the RGGI cap and the introduction of the CCR; the CCR and the reduced cap took effect in 2014. The 2016 Review established the ECR and an additional 30 percent reduction in the RGGI cap from 2020 to 2030. This review also included modifications to the CCR, offset categories, and the minimum reserve price. History On December 20, 2005, seven governors, from Connecticut, Delaware, Maine, New Hampshire, New Jersey, New York, and Vermont, signed a memorandum of understanding aimed at developing a cap-and-trade program for power sector CO2 emissions in the northeastern and mid-Atlantic region. The MOU established the initial framework for RGGI. The following year, in 2006, the same seven states amended the MOU and published the first Model Rule draft to guide individual state-level regulations. In 2007, Massachusetts, Maryland, and Rhode Island also signed on to the MOU. On December 31, 2008, the 10 MOU states finalized the first Model Rule, setting individual shares of the regional CO2 cap. RGGI's first compliance period began on January 1, 2009, and the first Model Rule served as the regulatory framework for each participating state until 2013. During this time, New Jersey withdrew from the MOU. Groups such as Acadia Center reported on lost revenue resulting from New Jersey's departure and argued for renewed participation. After the election of Governor Phil Murphy in 2017, New Jersey began to make preliminary moves to rejoin RGGI, and it reentered under an executive order on January 29, 2018. Updated Model Rules were released in 2013 and 2017. Virginia Overview After the 2017 election of Governor Ralph Northam in Virginia, the state began to make preliminary moves to join RGGI. However, the move was stopped in 2019 when the Republican-controlled state legislature wrote a provision into the budget bill prohibiting the state from joining RGGI. The move to join RGGI was re-introduced as part of the 2020 General Assembly session.
With Democratic majorities in both chambers of the General Assembly, the measure passed and was signed into law. Virginia effectively joined RGGI on January 1, 2021. Withdrawal On January 15, 2022, his first day in office after winning the 2021 Virginia gubernatorial election, Governor Glenn Youngkin signed Executive Order 9, calling for a reevaluation of Virginia's membership in RGGI. It has been noted that because Virginia entered the initiative through legislative action, Youngkin may lack the legal authority to withdraw from the initiative without legislative approval. On December 7, 2022, the Virginia Air Pollution Control Board (APCB) voted 4-1, with two abstentions, to initiate the repeal of state regulations governing its participation in RGGI. Shortly thereafter, the Joint Commission on Administrative Rules, a legislative oversight commission, voted 5-4 on December 20, 2022, objecting to the APCB's action initiating the withdrawal. The APCB took its final step on June 7, 2023, voting 4-3 to adopt the regulation and send it to the Governor's Office for publication in the Virginia Register. Litigation On August 21, 2023, the Southern Environmental Law Center, on behalf of the Association of Energy Conservation Professionals, Virginia Interfaith Power and Light, Appalachian Voices, and Faith Alliance for Climate Solutions, filed a lawsuit in the Fairfax Circuit Court challenging the APCB's authority to withdraw the state from RGGI. Governor Youngkin's administration maintains that his office does have the power to remove Virginia from RGGI, and that the regional carbon market program is a "regressive tax" that burdens residents. The lawsuit alleges that the APCB violated the Commonwealth's constitution because it "suspended and ignored the execution of law and invaded the General Assembly's legislative power." The lawsuit also notes that since joining RGGI, carbon dioxide emissions from Virginia power plants have decreased by nearly 17%, from about 32.8 million short tons in 2020 to about 27.3 million short tons in 2022. Funds paid by Virginia power plants for their excess emissions, meanwhile, have provided more than $328 million to help low- and moderate-income Virginians cut energy use and more than $295 million for flood control, a major issue in low-lying coastal and Chesapeake Bay communities. Pennsylvania Overview In October 2019, Pennsylvania Governor Tom Wolf issued Executive Order 2019-17 directing the Pennsylvania Department of Environmental Protection (DEP) to begin working on regulations to bring Pennsylvania into RGGI. In September 2020, Governor Wolf vetoed a bill (House Bill 2025) that would have restricted his administration's ability to take part in RGGI without the input of state lawmakers. Wolf vetoed the bill because he believed that the imminent effects of climate change outweighed other concerns; his decision was also heavily influenced by the economic and environmental benefits seen in other RGGI states. On July 13, 2021, Pennsylvania's Environmental Quality Board (EQB) voted 15-4 to adopt the rulemaking entitled "CO2 Budget Trading Program", otherwise known as RGGI. The Offices of General Counsel and of the Attorney General approved the rulemaking as to form and legality on July 26, 2021, and November 24, 2021, respectively.
Further, the Independent Regulatory Review Commission (IRRC), which evaluates whether proposed rules align with the public interest, had approved the rulemaking on September 1, 2021. Pennsylvania formally joined RGGI in April 2022; however, it remains unable to participate in the allowance auctions due to two "separate but related" lawsuits regarding the constitutionality of Pennsylvania's involvement in RGGI. Environmental advocates have calculated that Pennsylvania has missed out on a cumulative $1.5 billion in revenue. On November 1, 2023, the Commonwealth Court ruled that the rulemaking that made Pennsylvania a RGGI participant was void because it constituted an unconstitutional tax imposed by the Pennsylvania Department of Environmental Protection and Environmental Quality Board. On November 21, 2023, Pennsylvania Governor Josh Shapiro announced his office would appeal the Commonwealth Court's decision. Legislation Opposing RGGI At the start of Pennsylvania's 2021-2022 Legislative Session, a number of bills in opposition to RGGI were introduced. Notably, H.B. 637 and its Senate counterpart (S.B. 119) attempted to prohibit DEP from taking actions surrounding carbon pricing programs, including RGGI, without legislative approval. Both bills failed to garner enough support and expired at the end of the legislative session. A bill similar to the previously vetoed H.B. 2025 was reintroduced in the 2022-2023 Legislative Session and is pending before the House Environmental Resources & Energy Committee. Following the approval by IRRC on September 1, 2021, under Pennsylvania law a standing committee of either (or both) the Pennsylvania House of Representatives or Pennsylvania Senate was able, within 14 days, to report for full consideration by the House or Senate a concurrent resolution disapproving the regulation at issue. In this case, the Senate Environmental Resources and Energy Committee reported Senate Concurrent Regulatory Review Resolution 1 (SCRRR1) disapproving the rulemaking on September 14, 2021. Once reported, the House of Representatives and the Senate have 10 legislative days or 30 calendar days, whichever is longer, to adopt such a resolution. The Senate approved SCRRR1 on October 27, 2021, within the 10-legislative-day limitation; the House of Representatives, however, did not adopt SCRRR1 until December 15, 2021. Governor Wolf then vetoed the resolution on January 10, 2022. In response, on April 4, 2022, the Senate attempted to override the Governor's veto but failed (32-17), just one vote shy of the constitutional two-thirds requirement. Legislation Supporting RGGI In response to and in tandem with opposing legislation, two companion bills detailing the appropriation of RGGI auction proceeds to areas of need were introduced in the 2021–2022 Legislative Session. Senate Bill 15 and H.B. 1565 had Governor Wolf's support but failed to garner enough backing and expired at the end of the session. No such bills have yet been introduced in the 2023-2024 Legislative Session. Litigation Ziadeh v. Pennsylvania Legislative Reference Bureau On February 3, 2022, Patrick J. McDonnell, Secretary of the Department of Environmental Protection and Chairperson of the Environmental Quality Board, filed a lawsuit in the Commonwealth Court against the Pennsylvania Legislative Reference Bureau (LRB), its Director, and the Director of the Pennsylvania Code and Bulletin.
Secretary McDonnell alleged that on November 29, 2021, DEP, acting on behalf of the EQB, submitted the "CO2 Budget Trading Program" regulation to the LRB for final publication in the Pennsylvania Bulletin. The Director of the Pennsylvania Code and Bulletin acknowledged the submission of the rulemaking but refused to publish it because the period during which the House of Representatives and Senate could disapprove the rulemaking had not yet expired. On December 10, 2021, Secretary McDonnell again submitted the rulemaking for publication; in the intervening time, however, the Senate and House of Representatives had approved a resolution (SCRRR1) disapproving the rulemaking. Secretary McDonnell claimed that SCRRR1 was procedurally deficient because LRB's interpretation of the 10 legislative days or 30 calendar days (whichever is longer) in which the House of Representatives and Senate must act to disapprove a regulation was incorrect. McDonnell claimed that the timeframe for both chambers to act on a disapproval resolution runs concurrently rather than consecutively; in other words, both the House of Representatives and the Senate must act within the same 10 legislative days or 30 calendar days (whichever is longer), as opposed to LRB's interpretation that each chamber has its own 10 legislative days or 30 calendar days to act. For example, under LRB's interpretation, the Senate could vote to approve the resolution on the 25th calendar day and transmit it to the House of Representatives, which would then have another 10 legislative days or 30 calendar days (i.e., the clock starts over); under Secretary McDonnell's interpretation, the House of Representatives in that scenario would have only 5 days left to act on the resolution, since the timeframe is concurrent for both chambers. Injunction Granted On April 5, 2022, the Commonwealth Court issued an order temporarily blocking publication of the RGGI rulemaking. State legislators – Pennsylvania Senate President Pro Tempore Jake Corman, Senate Majority Leader Kim Ward, Senate Environmental Resources & Energy Committee Chair Gene Yaw, and Senate Appropriations Committee Chair Pat Browne – soon after intervened and requested a preliminary injunction barring publication. The stay was deemed dissolved as of April 11, 2022, and the RGGI rulemaking was finally published in the Pennsylvania Bulletin on April 23, 2022. On July 8, 2022, the Commonwealth Court granted the state Senators' request for a preliminary injunction enjoining DEP from implementing, enforcing, participating in, and administering the RGGI program. The Court found that the Senators had demonstrated irreparable harm per se by raising a substantial legal question as to whether the regulations constituted a tax requiring legislative approval, as opposed to a regulatory fee. The Court further found that implementation and enforcement of invalid regulations would cause great harm even if implementation of the regulations would result in an "immediate reduction" of carbon dioxide emissions from covered sources.
In addition, the Court found that the preliminary injunction would restore the status quo and that the Senators had shown a clear right to relief by raising substantial legal questions about separation of powers issues, as well as whether the allowance auction proceeds were an unconstitutional tax. However, the Court found that the Senators did not raise substantial legal questions regarding whether the regulation exceeded the authority granted to DEP and EQB to promulgate such a rulemaking, whether the regulations constituted an interstate compact or agreement in violation of the Pennsylvania Constitution, or whether the administrative process through which the regulations were adopted was lawful. Appeal to Pennsylvania Supreme Court On July 11, 2022, Acting Secretary Ramez Ziadeh (Secretary McDonnell's service with DEP ended July 1, 2022, and Acting Secretary Ziadeh was substituted as the petitioner) appealed the Commonwealth Court's July 8, 2022, preliminary injunction order to the Pennsylvania Supreme Court. In response, the Senators shortly thereafter moved to vacate the automatic stay of the Commonwealth Court's July 8, 2022, order that had been triggered by DEP's appeal. The Commonwealth Court granted the motion to vacate the automatic stay, and the preliminary injunction remained in effect. On August 31, 2022, the Pennsylvania Supreme Court denied DEP's request to reinstate the stay on the Commonwealth Court's injunction on implementing the RGGI rulemaking. Case Dismissed as Moot On January 19, 2023, the Commonwealth Court dismissed as moot DEP's petition seeking to compel LRB to publish the RGGI rulemaking. The Court noted that it was undisputed that the question of law raised by the petition was moot due to the subsequent publication of the rulemaking on April 23, 2022, and it further found that no exception to the mootness doctrine applied. The Court said the case raised "remarkable" legal questions of first impression, but that any judgment given would be advisory and have no effect. The Court said counterclaims by the Senators who intervened "remain extant" (i.e., the initial question of concurrent versus consecutive timeframes remains unresolved). Bowfin KeyCon Holdings, LLC v. Pennsylvania Department of Environmental Protection On November 1, 2023, the Commonwealth Court declared that the rulemaking making Pennsylvania a RGGI participant was void because it constituted an unconstitutional tax imposed by the Pennsylvania Department of Environmental Protection and Environmental Quality Board. The Court found that it was undisputed that "significant monetary benefits" were anticipated from participation in the RGGI carbon dioxide allowance auctions; that there was no cited authority for the agencies to obtain or retain auction proceeds for allowances purchased by non-Pennsylvania covered sources, which are not subject to the agencies' regulatory authority and are "not tethered to CO2 emissions in Pennsylvania"; that only 6% of proceeds would be attributable to the costs of administering the program; and that the auction proceeds would exceed total funds appropriated to the agencies "by nearly threefold." The Court found that participation in RGGI would thus generate moneys "grossly disproportionate" to oversight costs and annual regulatory needs, and would relate to activities beyond the agencies' jurisdiction. The Court held that the regulations therefore were invalid and unenforceable.
The Court said that RGGI participation "may only be achieved through legislation duly enacted by the Pennsylvania General Assembly." Three judges did not participate in the case, and one judge dissented, writing that in her view there were genuine issues of material fact at this stage regarding whether the rulemaking established a fee or a tax. Appeal to Pennsylvania Supreme Court On November 21, 2023, the Pennsylvania Governor's Office announced that the administration would appeal the Commonwealth Court's ruling, saying in a statement that the decision on RGGI was "limited to questions of executive authority, and our Administration must appeal in order to protect that important authority for this Administration and all future governors." The statement urged the Pennsylvania General Assembly to take action: "Should legislative leaders choose to engage in constructive dialogue, the Governor is confident we can agree on a stronger alternative to RGGI. If they take their ball and go home, they will be making a choice not to advance commonsense energy policy that protects jobs, the environment, and consumers in Pennsylvania." Should Governor Shapiro's administration win the appeal, it is unclear whether the governor will maintain the Commonwealth's membership in RGGI. See also Climate Stewardship Bill The Climate Registry Western Regional Climate Action Initiative Midwestern Greenhouse Gas Reduction Accord Climate Change Action Plan 2001 List of climate change initiatives Regulation of greenhouse gases under the Clean Air Act External links Regional Greenhouse Gas Initiative official website New England Governors/Eastern Canadian Premiers Climate Change Action Plan International Carbon Action Partnership The Climate Registry
environmental effects of bitcoin
The environmental effects of bitcoin are significant. Bitcoin mining, the process by which bitcoins are created and transactions are finalized, is energy-consuming and results in carbon emissions, as about half of the electricity used is generated through fossil fuels. As of 2022, bitcoin mining is estimated to be responsible for 0.2% of world greenhouse gas emissions and to represent 0.4% of global electricity consumption. Moreover, bitcoins are mined on specialized computer hardware with a short useful life expectancy, resulting in electronic waste. The amount of electrical energy consumed and e-waste generated by bitcoin mining is often compared with that of countries like Greece or the Netherlands. Greenhouse gas emissions Mining as an electricity-intensive process Bitcoin mining is a highly electricity-intensive proof-of-work process. Miners run bitcoin-mining software and compete against each other to be the first to win the current 10-minute block and thereby receive the block reward, which is paid in newly created bitcoins. A transition to the more energy-efficient proof-of-stake has been described as a sustainable alternative to Bitcoin's proof-of-work and as a potential solution to its environmental issues. Bitcoin mining's geographic distribution makes it difficult for researchers to identify miners' locations and electricity use, and therefore to translate energy consumption into carbon emissions. As of 2022, the Cambridge Centre for Alternative Finance (CCAF) estimates that bitcoin consumes 95.5 TW⋅h (344 PJ) annually, representing 0.4% of the world's electricity consumption and ranking bitcoin mining between Belgium and the Netherlands in terms of electricity consumption. Per a 2021 study published in Finance Research Letters, differences in underlying assumptions and variation in the coverage of time periods and forecast horizons have led to bitcoin carbon footprint estimates spanning from 1.2–5.2 Mt CO2 to 130.50 Mt CO2 per year. According to a 2022 estimate published in Joule, bitcoin mining may result in annual carbon emissions of 65 Mt CO2, representing 0.2% of global emissions, which is comparable to the level of emissions of Greece. Comparison to other payment systems One 2021 study by cryptocurrency investment firm Galaxy Digital claimed that bitcoin mining used less energy than the banking system because, unlike banking, bitcoin mining's energy usage is not correlated with its transactional volume. The International Monetary Fund estimated in 2022 that the global payment system represented about 0.2% of global electricity consumption, comparable to the consumption of Portugal or Bangladesh. Citing the Galaxy Digital report, the authors note that the energy consumption of the entire banking sector is larger, as banks offer more services than just payments. Energy used is estimated at between 100 and 1,000 kilowatt-hours per transaction. However, Bitcoin's energy expenditure is not directly linked to the number of transactions, and this estimate does not reflect the energy efficiencies from layer 2 solutions, like the Lightning Network, and batching, which allow Bitcoin to process more payments than the number of on-chain transactions suggests. For instance, as of 2022, Bitcoin processes 100 million transactions per year, representing 250 million payments.
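The per-transaction figures can be reproduced from the estimates above; a minimal sketch of the arithmetic (all inputs are the 2022 estimates cited in this section, and the outputs are illustrative back-of-the-envelope values, not official figures):

    # Cited 2022 estimates: ~95.5 TWh of electricity per year, ~100 million
    # on-chain transactions representing ~250 million payments.
    annual_energy_kwh = 95.5e9        # 95.5 TWh expressed in kWh
    transactions_per_year = 100e6
    payments_per_year = 250e6

    kwh_per_transaction = annual_energy_kwh / transactions_per_year
    kwh_per_payment = annual_energy_kwh / payments_per_year

    print(f"{kwh_per_transaction:.0f} kWh per on-chain transaction")  # ~955 kWh
    print(f"{kwh_per_payment:.0f} kWh per payment")                   # ~382 kWh

The per-transaction result lands within the 100 to 1,000 kWh range quoted above, and counting payments rather than on-chain transactions lowers the figure by the 2.5x batching factor.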
Still, the IMF notes that the comparison remains valid, as credit card transactions use only 0.001–0.01 kWh each. In September 2022, a study in Scientific Reports found that from 2016 to 2021, each US dollar worth of mined bitcoin market value caused 35 cents worth of climate damage, comparable to the beef industry (33 cents per dollar) and the gasoline industry (41 cents per dollar). Compared to gold mining, "Bitcoin's climate damage share is nearly an order of magnitude higher", according to study co-author economist Andrew Goodkind. Bitcoin mining energy mix Until 2021, most bitcoin mining was done in China. Chinese miners relied on cheap coal power in Xinjiang and Inner Mongolia in late autumn, winter and spring, and then migrated to regions with overcapacities in low-cost hydropower, like Sichuan and Yunnan, between May and October. In June 2021, China banned bitcoin mining and miners moved to other countries. By August 2021, mining was instead concentrated in the U.S. (35%), Kazakhstan (18%), and Russia (11%). The shift from coal resources in China to coal resources in Kazakhstan increased Bitcoin's carbon footprint, as Kazakhstani coal plants use hard coal, which has the highest carbon content of all coal types. Despite the ban, covert mining operations gradually came back to China, reaching 21% of global hashrate as of 2022. Reducing the environmental impact of bitcoin is possible by mining only with clean electricity sources. As of 2021, according to The New York Times, bitcoin's use of renewables ranged from 40% to 75%; as of 2023, according to Bloomberg Intelligence, renewables represent about half of global bitcoin mining sources. Still, experts and government authorities, such as the Swedish Financial Supervisory Authority, the European Securities and Markets Authority and the European Central Bank, have suggested that the use of renewable energy for mining may limit the availability of clean energy for ordinary uses by the general population. The development of intermittent renewable energy sources, such as wind power and solar power, is challenging because they cause instability in the electrical grid. Several papers have concluded that these renewable power stations could use their surplus energy to mine bitcoin and thereby reduce curtailment, hedge electricity price risk, stabilize the grid, and increase the profitability of renewable energy infrastructure, thus accelerating the transition to sustainable energy and decreasing bitcoin's carbon footprint. According to a 2023 review published in Resource and Energy Economics, Bitcoin mining can indeed increase renewable capacity, but it may also increase carbon emissions; however, using Bitcoin to provide demand response largely mitigates its environmental impact. Conversely, Bitcoin mining may also incentivize the reopening of abandoned fossil fuel plants. For instance, Greenidge Generation, a closed coal-fired power plant in New York State, was converted to natural gas to mine Bitcoin. Such impact is difficult to quantify. Methane emissions Bitcoin has been mined using electricity generated through the combustion of associated petroleum gas (APG), a methane-rich byproduct of crude oil drilling that is otherwise sometimes flared or released into the atmosphere. Methane is a greenhouse gas with a global warming potential 28 to 36 times greater than that of CO2.
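A minimal sketch of the arithmetic behind this comparison (the GWP range comes from the sentence above; the combustion figure follows from the molar-mass ratio of CO2 to CH4, 44/16, which is an added assumption of basic stoichiometry rather than a figure from the text):

    # Venting 1 tonne of CH4 counts as 28-36 tonnes of CO2-equivalent,
    # using the global warming potential quoted above.
    gwp_low, gwp_high = 28, 36

    # Burning it instead (CH4 + 2 O2 -> CO2 + 2 H2O) converts each tonne of
    # CH4 into CO2 at the molar-mass ratio 44/16, about 2.75 tonnes of CO2.
    co2_from_combustion = 44 / 16

    print(f"Vented: {gwp_low}-{gwp_high} t CO2e per t CH4")
    print(f"Burned: {co2_from_combustion:.2f} t CO2 per t CH4")
    # Combustion cuts the warming impact by roughly a factor of ten, which
    # is why flaring or generator use beats venting, though the CO2
    # released is still a net emission.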
By converting more of the methane to CO2 than flaring alone would, using APG generators reduces the APG's contribution to the greenhouse effect, but the practice is still harmful to the environment. In places where flaring is prohibited (such as Colorado), this practice has allowed more oil drills to operate by offsetting costs, which further delays the fossil fuel phase-out. The process also allows oil companies such as ExxonMobil to report lower emissions by selling the leaked gas, shifting responsibility to buyers and avoiding a real commitment to reductions. Electronic waste Bitcoins are usually mined on specialized computing hardware, called application-specific integrated circuits, with no alternative use beyond bitcoin mining. Due to the consistent increase of the bitcoin network's hashrate, mining devices are estimated to have an average lifespan of 1.3 years before they become unprofitable and need to be replaced, resulting in significant electronic waste. As of 2021, Bitcoin's annual e-waste was estimated to be over 30,000 tonnes, comparable to the small IT equipment waste produced by the Netherlands, while each Bitcoin transaction was estimated to result in 272 g (9.6 oz) of e-waste. Responses In March 2022, President Joe Biden signed an executive order calling for the Environmental Protection Agency (EPA) to compile a report that "address[es] the effect of crypto-assets' consensus mechanisms on energy usage, including mitigating measures, alternative consensus mechanisms, and design tradeoffs." In September 2022, a report of the US Office of Science and Technology Policy highlighted the need for increased transparency about electricity usage, greenhouse gas emissions, and e-waste. In November 2022, the EPA confirmed it was working on the climate impacts of cryptocurrency mining. Some US states, such as Kentucky, Montana, Texas, and Wyoming, encourage Bitcoin mining with tax breaks, while New York State banned new fossil fuel crypto mining plants with a two-year moratorium, citing environmental concerns. Per a 2021 study in Finance Research Letters, "climate-related criticism of bitcoin is primarily based on the network's absolute carbon emissions, without considering its market value." It argues that the inclusion of bitcoin in an equity portfolio reduces that portfolio's "aggregate carbon emissions". Sources Agur, Itai; Deodoro, Jose; Lavayssière, Xavier; Martinez Peria, Soledad; Sandri, Damiano; Tourpe, Hervé; Villegas Bauer, German (2022). Digital Currencies and Energy Consumption. International Monetary Fund. ISBN 979-8-4002-0824-9. de Vries, Alex; Gallersdörfer, Ulrich; Klaaßen, Lena; Stoll, Christian (16 March 2022). "Revisiting Bitcoin's carbon footprint". Joule. 6 (3): 498–502. doi:10.1016/j.joule.2022.02.005. ISSN 2542-4351. S2CID 247143939.
post–kyoto protocol negotiations on greenhouse gas emissions
Post-Kyoto negotiations refer to high-level talks attempting to address global warming by limiting greenhouse gas emissions. Generally part of the United Nations Framework Convention on Climate Change (UNFCCC), these talks concern the period after the first "commitment period" of the Kyoto Protocol, which expired at the end of 2012. Negotiations were mandated by the adoption of the Bali Road Map and Decision 1/CP.13 ("The Bali Action Plan"). UNFCCC negotiations are conducted within two subsidiary bodies, the Ad Hoc Working Group on Long-term Cooperative Action under the Convention (AWG-LCA) and the Ad Hoc Working Group on Further Commitments for Annex I Parties under the Kyoto Protocol (AWG-KP), and were expected to culminate in the United Nations Climate Change Conference taking place in December 2009 in Copenhagen (COP-15). The negotiations are supported by a number of external processes, including the G8 process, a number of regional meetings, and the Major Economies Forum on Energy and Climate launched by US President Barack Obama in March 2009. High-level talks were held at the meeting of the G8+5 Climate Change Dialogue in February 2007 and at a number of subsequent G8 meetings, most recently leading to the adoption of the G8 leaders' declaration "Responsible Leadership for a Sustainable Future" during the G8 summit in L'Aquila, Italy, in July 2009. February 2007 Washington Declaration In the non-binding "Washington Declaration" on February 16, 2007, the G8+5 group of leaders agreed in principle to a global cap-and-trade system that would apply to both industrialized nations and developing countries, which they hoped would be in place by 2009. 33rd G8 summit On June 7, 2007, leaders at the 33rd G8 summit issued a non-binding communiqué announcing that the G8 nations would "aim to at least halve global CO2 emissions by 2050". The details enabling this to be achieved would be negotiated by environment ministers within the United Nations Framework Convention on Climate Change in a process that would also include the major emerging economies. Groups of countries would also be able to reach additional agreements on achieving the goal outside and in parallel with the United Nations process. The G8 also announced their desire to use the proceeds from the auction of emission rights and other financial tools to support climate protection projects in developing countries. The agreement was welcomed by British Prime Minister Tony Blair as "a major, major step forward". French president Nicolas Sarkozy would have preferred a binding figure for emissions reduction to have been set; this was apparently blocked by U.S. President George W. Bush until the other major greenhouse gas emitting countries, like India and China, made similar commitments. 2007 UN General Assembly plenary debate As part of the schedule leading up to the September UN High-Level Event, on July 31 the United Nations General Assembly opened its first-ever plenary session devoted exclusively to climate change, which also included prominent scientists and business leaders.
The debate, at which nearly 100 nations spoke, was scheduled to last two days but was extended for a further day to allow a greater number of "worried nations" to describe their climate-related problems. In his opening speech, Secretary-General Ban Ki-moon urged Member States to work together, stating that the time had come for "decisive action on a global scale", and called for a "comprehensive agreement under the United Nations Framework Convention on Climate Change process that tackles climate change on all fronts, including adaptation, mitigation, clean technologies, deforestation and resource mobilization". In closing the conference, General Assembly President Haya Rashed Al-Khalifa called for an "equitable, fair and ambitious global deal to match the scale of the challenges ahead". She had earlier stressed the urgency of the situation, stating that "the longer we wait, the more expensive this will be". The day after the session ended, the UN launched its new climate change web site detailing its activities relating to global warming. 2007 Vienna Climate Change Talks and Agreement A round of climate change talks under the auspices of the United Nations Framework Convention on Climate Change (UNFCCC) concluded in Austria on 31 August 2007 with agreement on key elements for an effective international response to climate change. A key feature of the talks was a United Nations report that showed how energy efficiency could yield significant cuts in emissions at low cost. The talks set the stage for the 2007 United Nations Climate Change Conference held in Bali in December 2007. September 2007 United Nations High-Level Event Alongside the meeting of the United Nations General Assembly, Secretary-General Ban Ki-moon was to hold informal high-level discussions on the post-Kyoto treaty on September 24. It was expected that these would pave the way for the United Nations Climate Change Conference held in Bali in December 2007. Three Special Envoys on Climate Change, appointed on May 1, 2007, held discussions with various governments to define and plan the event. In advance of the High-Level Event, the Secretary-General hoped that world leaders would "send a powerful political signal to the negotiations in Bali that 'business as usual' will not do and that they are ready to work jointly with others towards a comprehensive multilateral framework for action". September 2007 Washington conference It emerged on August 3, 2007, that representatives of the United Nations and of major industrialized and developing countries were being invited by George Bush to a conference in Washington on September 27 and 28. Countries invited were believed to include the members of the G8+5 (Canada, France, Germany, Italy, Japan, Russia, United Kingdom, United States, Brazil, China, India, Mexico and South Africa), together with South Korea, Australia and Indonesia. The meeting was to be hosted by US Secretary of State Condoleezza Rice and was envisaged as the first of several extending into 2008. Initial reaction to the news of the conference invitation was mixed. 2007 United Nations Climate Change Conference in Bali Negotiations on a successor to the Kyoto Protocol dominated the 2007 United Nations Climate Change Conference.
A meeting of environment ministers and experts held in June called on the conference to agree a road map, timetable and "concrete steps for the negotiations" with a view to reaching an agreement by 2009. The conference ended with an all-night session of hard bargaining over words and their meaning. 2008 United Nations Climate Change Conference in Poznań Following preliminary talks in Bangkok, Bonn, and Accra, the 2008 negotiations culminated in December with the 2008 United Nations Climate Change Conference in Poznań, Poland. September 2009 United Nations Secretary General's Summit on Climate Change United Nations Secretary General Ban Ki-moon convened a high-level event on climate change on 22 September 2009 to which heads of state and government were invited. This event was intended to build further political momentum for an ambitious agreed outcome to be adopted at COP-15 in Copenhagen. 2009 United Nations Climate Change Conference in Copenhagen (COP-15) Following preparatory talks in Bonn, Germany, as well as Bangkok and Barcelona, the 2009 conference was held in December 2009 in Copenhagen, Denmark, where the treaty succeeding the Kyoto Protocol had been expected to be adopted. Some media sources claimed beforehand that the meeting would lead to empty promises without measurable goals. At a meeting of the Group of Eight (G8), world leaders agreed to halve carbon emissions by 2050; however, they did not set specific targets because they did not agree on a base year. Members of the Copenhagen Climate Council nevertheless acknowledged that action needed to happen fast. "My personal view is that the future of humanity is at stake," said Tim Flannery, professor at Macquarie University and chairman of the Copenhagen Climate Council, in an interview with chinadialogue.net. At the conference, delegates approved a motion to "take note of the Copenhagen Accord of December 18, 2009". The motion was not unanimous, and it is therefore not considered to be legally binding. UN Secretary General Ban Ki-moon welcomed the US-backed climate deal as an "essential beginning", although it subsequently emerged that the US had "used spying, threats and promises of aid" to gain support for the Accord, under which its emissions pledge is the lowest by any leading nation. The Copenhagen Accord recognises the scientific case for keeping temperature rises below 2 °C, but does not contain the commitments to reduced emissions that would be necessary to achieve that aim, let alone 1.5 °C. One part of the agreement pledges US$30 billion to the developing world over the next three years, rising to US$100 billion per year by 2020, to help poor countries adapt to climate change. Earlier proposals that would have aimed to limit temperature rises to 1.5 °C and cut CO2 emissions by 80% by 2050 were dropped. An agreement was also reached to set up a deal reducing deforestation in return for cash from developed countries. 2011 United Nations Climate Change Conference The 2011 United Nations Climate Change Conference was held in Durban, South Africa, from 28 November to 12 December 2011 to establish a new treaty to limit carbon emissions. The president of the conference was Maite Nkoana-Mashabane. The conference agreed on a legally binding deal comprising all countries, to be prepared by 2015 and to take effect in 2020. 2012 United Nations Climate Change Conference The 2012 United Nations Climate Change Conference was held in Qatar from 26 November to 7 December 2012.
Just before the conference, New Zealand announced that it would not be continuing to take part in the Kyoto Protocol. New Zealand's climate minister, Tim Groser, said the 15-year-old agreement was outdated and that New Zealand was "ahead of the curve" in looking for a replacement that would include developing nations. The conference reached an agreement to extend the life of the Kyoto Protocol until 2020 and to reify the 2011 Durban Platform, meaning that a successor to the Protocol is set to be developed by 2015 and implemented by 2020. 2013 United Nations Climate Change Conference The 2013 United Nations Climate Change Conference was the 19th yearly session of the Conference of the Parties (COP) to the 1992 United Nations Framework Convention on Climate Change (UNFCCC) and the 9th session of the Meeting of the Parties (CMP) to the 1997 Kyoto Protocol (the protocol having been developed under the UNFCCC's charter). The conference was held in Warsaw, Poland, from 11 to 22 November 2013. Climate Summit 2014 On 23 September 2014, the UN Climate Summit 2014 was held. India, Russia, Canada and Australia (all among the top 15 countries by GHG emissions) did not attend the meeting; 125 other countries did attend. France promised to deposit 750 million into the UN climate fund. Perhaps the biggest announcement came from outside the Climate Summit and was made by the Rockefeller Brothers Fund, which announced that it would withdraw its investments from the fossil fuel industry, more specifically from coal and tar sands. According to Arabella Advisors, US$50 billion was withdrawn from the industry. This marked the beginning of private investors and large companies withdrawing from polluting industries, at a time when the political motivation for reducing GHG emissions was starting to stall. See also Action for Climate Empowerment Action on climate change Avoiding dangerous climate change Carbon capture and storage Convention on Biological Diversity List of countries by carbon dioxide emissions List of countries by carbon dioxide emissions per capita List of countries by greenhouse gas emissions per capita Low-carbon and post-carbon economies Paris Agreement, the post-Kyoto agreement Plug-in hybrid Technology transfer Tragedy of the commons Greenhouse Development Rights External links United Nations Climate Change web site Climate Policy after the Bali Summit, Allianz Knowledge Site, January 2008 "Bali Dancing", The Walrus article on Canada's much-criticized failure to uphold Kyoto, 2008 Policy options An International Policy Architecture for the Post-Kyoto Era, S Olmstead & R Stavins, AEI-Brookings Joint Center for Regulatory Studies, 2006 Governing Climate: The Struggle For A Global Framework Beyond Kyoto, Taishi Sugiyama (editor), International Institute for Sustainable Development, 2005 Imagining a Post-Kyoto Climate Regime, Prof. Adil Najam, Fletcher School of Law and Diplomacy, 2005 An analysis of a post-Kyoto climate policy model, K Anderson & A Bows, Tyndall Centre for Climate Change Research, 2005 International Climate Efforts Beyond 2012, Center for Climate and Energy Solutions, 2005
bp
BP p.l.c. (formerly The British Petroleum Company p.l.c. and BP Amoco p.l.c., stylised bp) is a British multinational oil and gas company headquartered in London, England. It is one of the oil and gas "supermajors" and one of the world's largest companies measured by revenues and profits. It is a vertically integrated company operating in all areas of the oil and gas industry, including exploration and extraction, refining, distribution and marketing, power generation, and trading. BP's origins date back to the founding of the Anglo-Persian Oil Company in 1909, established as a subsidiary of Burmah Oil Company to exploit oil discoveries in Iran. In 1935, it became the Anglo-Iranian Oil Company and in 1954 adopted the name British Petroleum. In 1959, the company expanded beyond the Middle East to Alaska. British Petroleum acquired majority control of Standard Oil of Ohio in 1978. Formerly majority state-owned, the company was privatised in stages by the British government between 1979 and 1987. British Petroleum merged with Amoco in 1998, becoming BP Amoco plc, and acquired ARCO and Burmah Castrol in 2000 and Aral AG in 2002. The company's name was shortened to BP p.l.c. in 2001. From 2003 to 2013, BP was a partner in the TNK-BP joint venture in Russia, and as of December 2022 it holds a nearly 20% stake in Rosneft, accounting for a third of BP's total production. BP had earlier promised to divest the Rosneft holdings but was unable to find a buyer; instead, BP wrote the assets off its books in a $25 billion non-cash charge. As of 31 December 2018, BP had operations in nearly 80 countries, produced around 3.7 million barrels per day (590,000 m3/d) of oil equivalent, and had total proven reserves of 19.945 billion barrels (3.1710×10⁹ m3) of oil equivalent. The company has around 18,700 service stations, which it operates under the BP brand worldwide, the Amoco brand in the United States, and the Aral brand in Germany. Its largest division is BP America in the United States. BP is the fourth-largest investor-owned oil company in the world by 2021 revenues (after ExxonMobil, Shell, and TotalEnergies). BP had a market capitalisation of US$98.36 billion as of 15 September 2022, placing it 122nd in the world, and its Fortune Global 500 rank was 35th in 2022, with revenues of US$164.2 billion. The company's primary stock listing is on the London Stock Exchange, where it is a member of the FTSE 100 Index. From 1988 to 2015, BP was responsible for 1.53% of global industrial greenhouse gas emissions and has been directly involved in several major environmental and safety incidents. Among them were the 2005 Texas City Refinery explosion, which caused the death of 15 workers and resulted in a record-setting OSHA fine; Britain's largest oil spill, the wreck of Torrey Canyon in 1967; and the 2006 Prudhoe Bay oil spill, the largest oil spill on Alaska's North Slope, which resulted in a US$25 million civil penalty, the largest per-barrel penalty at that time for an oil spill.
BP's worst environmental catastrophe was the 2010 Deepwater Horizon oil spill, the largest accidental release of oil into marine waters in history, which leaked about 4.9 million barrels (210 million US gal; 780,000 m3) of oil, causing severe environmental, human health, and economic consequences, and serious legal and public relations repercussions for BP. The disaster cost the company more than $4.5 billion in fines and penalties, the largest criminal resolution in US history at the time, plus an additional $18.7 billion in Clean Water Act-related penalties and other claims. Altogether, the oil spill cost the company more than $65 billion. History 1909 to 1954 In May 1908, a group of British geologists discovered a large amount of oil at Masjed Soleyman, in Khuzestan Province in the southwest of Persia (Iran). It was the first commercially significant find of oil in the Middle East. William Knox D'Arcy, by contract with Ali-Qoli Khan Bakhtiari, had obtained permission to explore for oil for the first time in the Middle East, an event which changed the history of the entire region. The oil discovery led to the development of the petrochemical industry and to the establishment of industries that strongly depended on oil. On 14 April 1909, the Anglo-Persian Oil Company (APOC) was incorporated as a subsidiary of Burmah Oil Company. Some of the shares were sold to the public. Lord Strathcona became the company's first chairman and a minority shareholder. Immediately after the company was established, the British government asked Percy Cox, British resident to Bushehr, to negotiate an agreement with Sheikh Khaz'al Ibn Jabir of Arabistan for APOC to obtain a site on Abadan Island for a refinery, depot, storage tanks, and other operations. The refinery was built and began operating in 1912. In 1914, the British government acquired a controlling interest (50.0025%) in the company at the urging of Winston Churchill, then First Lord of the Admiralty, and the British navy quickly switched from coal to oil for the majority of its warships. APOC also signed a 30-year contract with the British Admiralty to supply oil to the Royal Navy at a fixed price. In 1915, APOC established its shipping subsidiary, the British Tanker Company, and in 1916 it acquired the British Petroleum Company, which was a marketing arm of the German Europäische Petroleum Union in Britain. In 1919, the company became a shale-oil producer by establishing a subsidiary named Scottish Oils, which merged the remaining Scottish oil-shale industries. After World War I, APOC started marketing its products in Continental Europe and acquired stakes in local marketing companies in several European countries. Refineries were built at Llandarcy in Wales (the first refinery in the United Kingdom) and Grangemouth in Scotland. It also acquired a controlling stake in the Courchelettes refinery in France and formed, in conjunction with the Government of Australia, a partnership named Commonwealth Oil Refineries, which built Australia's first refinery in Laverton, Victoria. In 1923, Burmah employed Winston Churchill as a paid consultant to lobby the British government to allow APOC to have exclusive rights to Persian oil resources, which were subsequently granted by the Iranian monarchy. APOC and the Armenian businessman Calouste Gulbenkian were the driving forces behind the creation of the Turkish Petroleum Company (TPC) in 1912 to explore for oil in Mesopotamia (now Iraq); by 1914, APOC held 50% of TPC's shares.
In 1925, TPC received a concession for the Mesopotamian oil resources from the Iraqi government under British mandate. TPC finally struck oil in Iraq on 14 October 1927. By 1928, APOC's shareholding in TPC, which by then was named the Iraq Petroleum Company (IPC), had been reduced to 23.75% as a result of the changing geopolitics following the break-up of the Ottoman Empire and the Red Line Agreement. Relations were generally cordial between the pro-west Hashemite monarchy (1932–58) in Iraq and IPC, in spite of disputes centred on Iraq's wish for greater involvement and more royalties. During the 1928–68 period, IPC monopolised oil exploration inside the Red Line, excluding Saudi Arabia and Bahrain.

In 1927, Burmah Oil and Royal Dutch Shell formed the joint marketing company Burmah-Shell. In 1928, APOC and Shell formed the Consolidated Petroleum Company for sales and marketing in Cyprus, South Africa and Ceylon, which was followed in 1932 by a joint marketing company, Shell-Mex and BP, in the United Kingdom. In 1937, AIOC and Shell formed the Shell/D'Arcy Exploration Partners partnership to explore for oil in Nigeria. The partnership was equally owned but operated by Shell. It was later replaced by the Shell-D'Arcy Petroleum Development Company and the Shell-BP Petroleum Development Company (now the Shell Petroleum Development Company).

In 1934, APOC and Gulf Oil founded the Kuwait Oil Company as an equally owned partnership. The oil concession rights were awarded to the company on 23 December 1934 and the company started drilling operations in 1936. In 1935, Rezā Shāh asked the international community to refer to Persia as 'Iran', which was reflected in the name change of APOC to the Anglo-Iranian Oil Company (AIOC).

In 1937, the Iraq Petroleum Company, 23.75% owned by BP, signed an oil concession agreement with the Sultan of Muscat covering the entire region of the Sultanate, which was in fact limited to the coastal area of present-day Oman. After several years of failure to discover oil in the Sultanate's region, IPC presumed that oil was more likely to be found in the interior region of Oman, which was part of the Imamate of Oman. IPC offered financial support to raise an armed force that would assist the Sultanate in occupying the interior region of Oman. Later, in 1954, the Sultan of Muscat, backed by the British government and the financial aid he received from IPC, started occupying regions within the interior of Oman, which led to the outbreak of the Jebel Akhdar War, which lasted for more than five years.

In 1947, British Petroleum Chemicals was incorporated as a joint venture of AIOC and The Distillers Company. In 1956, the company was renamed British Hydrocarbon Chemicals.

Following World War II, nationalistic sentiments were on the rise in the Middle East, most notably Iranian nationalism and Arab nationalism. In Iran, the AIOC and the pro-western Iranian government led by Prime Minister Ali Razmara resisted nationalist calls to revise AIOC's concession terms in Iran's favour. In March 1951, Razmara was assassinated and Mohammed Mossadeq, a nationalist, was elected as the new prime minister by the Majlis of Iran (parliament). In April 1951, the Iranian government nationalised the Iranian oil industry by unanimous vote, and the National Iranian Oil Company (NIOC) was formed, displacing the AIOC. The AIOC withdrew its management from Iran, and Britain organised an effective worldwide embargo of Iranian oil.
The British government, which owned the AIOC, contested the nationalisation at the International Court of Justice at The Hague, but its complaint was dismissed.

Prime Minister Churchill asked President Eisenhower for help in overthrowing Mossadeq. The anti-Mossadeq plan was orchestrated under the code-name 'Operation Ajax' by the CIA and 'Operation Boot' by SIS (MI6). The CIA and the British helped stage a coup in August 1953, the 1953 Iranian coup d'état, which established the pro-Western general Fazlollah Zahedi as the new prime minister and greatly strengthened the political power of Shah Mohammad Reza Pahlavi. The AIOC was able to return to Iran.

1954 to 1979

In 1954, the AIOC became the British Petroleum Company. After the 1953 Iranian coup d'état, Iranian Oil Participants Ltd (IOP), a holding company, was founded in October 1954 in London to bring Iranian oil back to the international market. British Petroleum was a founding member of this company with a 40% stake. IOP operated and managed oil facilities in Iran on behalf of NIOC. Similar to the Saudi-Aramco "50/50" agreement of 1950, the consortium agreed to share profits on a 50–50 basis with Iran, "but not to open its books to Iranian auditors or to allow Iranians onto its board of directors."

In 1953, British Petroleum entered the Canadian market through the purchase of a minority stake in the Calgary-based Triad Oil Company, and expanded further to Alaska in 1959, resulting in the discovery of oil at Prudhoe Bay in 1969. In 1956, its subsidiary D'Arcy Exploration Co. (Africa) Ltd. was granted four oil concessions in Libya. In 1962, Scottish Oils ceased oil-shale operations. In 1965, it was the first company to strike oil in the North Sea. In 1969, BP entered the United States by acquiring the East Coast refining and marketing assets of Sinclair Oil Corporation. The Canadian holding company of British Petroleum was renamed BP Canada in 1969, and in 1971 it acquired a 97.8% stake in Supertest Petroleum.

By the 1960s, British Petroleum had developed a reputation for taking on the riskiest ventures. This earned the company massive profits; it also earned it the worst safety record in the industry. In 1967, the giant oil tanker Torrey Canyon foundered off the English coast. Over 32 million US gallons (760,000 bbl; 120,000 m3) of crude oil was spilled into the Atlantic and onto the beaches of Cornwall and Brittany, causing Britain's worst-ever oil spill. The ship was owned by the Bahamas-based Barracuda Tanker Corporation and was flying the flag of Liberia, a well-known flag of convenience, but was being chartered by British Petroleum. The ship was bombed by RAF jet bombers in an effort to break it up and burn off the leaking oil, but this failed to destroy the oil slick.

In 1967, BP acquired the chemical and plastics assets of The Distillers Company, which were merged with British Hydrocarbon Chemicals to form BP Chemicals.

The company's oil assets were nationalised in Libya in 1971, in Kuwait in 1975, and in Nigeria in 1979.
In Iraq, IPC ceased its operations after it was nationalised by the Ba'athist Iraqi government in June 1972, although the Iraq Petroleum Company legally remains in existence as a dormant company, and one of its associated companies, the Abu Dhabi Petroleum Company (ADPC), formerly Petroleum Development (Trucial Coast) Ltd, also continues with its original shareholding intact.

The intensified power struggle between oil companies and host governments in the Middle East, along with the oil price shocks that followed the 1973 oil crisis, meant that British Petroleum lost most of its direct access to crude oil supplies produced in countries belonging to the Organization of Petroleum Exporting Countries (OPEC), and prompted it to diversify its operations beyond its heavily Middle East-dependent oil production. In 1976, BP and Shell de-merged their marketing operations in the United Kingdom by dividing Shell-Mex and BP. In 1978, the company acquired a controlling interest in Standard Oil of Ohio (Sohio).

In Iran, British Petroleum continued to operate until the Islamic Revolution in 1979. The new regime of Ayatollah Khomeini nationalised all of the company's assets in Iran without compensation; as a result, BP lost 40% of its global crude oil supplies.

In the 1970s and 1980s, BP diversified into coal, minerals and nutrition businesses, all of which were later divested.

1979 to 1997

The British government sold 80 million shares of BP at $7.58 in 1979, as part of Thatcher-era privatisation. This sale represented slightly more than 5% of BP's total shares and reduced the government's ownership of the company to 46%. On 19 October 1987, Prime Minister Margaret Thatcher authorised the sale of an additional £7.5 billion ($12.2 billion) of BP shares at 333 pence, representing the government's remaining 31% stake in the company.

In November 1987, the Kuwait Investment Office purchased a 10.06% interest in BP, becoming the largest institutional shareholder. The following May, the KIO purchased additional shares, bringing its ownership to 21.6%. This raised concerns within BP that operations in the United States, BP's primary country of operations, would suffer. In October 1988, the British Department of Trade and Industry required the KIO to reduce its shareholding to 9.6% within 12 months.

Peter Walters was the company chairman from 1981 to 1990. During his period as chairman he reduced the company's refining capacity in Europe. In 1982, the downstream assets of BP Canada were sold to Petro-Canada. In 1984, Standard Oil of California was renamed the Chevron Corporation; it bought Gulf Oil, the largest merger in history at that time. To meet anti-trust regulations, Chevron divested many of Gulf's operating subsidiaries and sold some Gulf stations and a refinery in the eastern United States to British Petroleum and Cumberland Farms in 1985. In 1987, British Petroleum negotiated the acquisition of Britoil and the remaining publicly traded shares of Standard Oil of Ohio. In the same year, it was listed on the Tokyo Stock Exchange, where its shares were traded until delisting in 2008.

Walters was replaced by Robert Horton in 1990. Horton carried out a major corporate downsizing exercise, removing various tiers of management at the head office. In 1992, British Petroleum sold its 57% stake in BP Canada (upstream operations), which was renamed Talisman Energy.
John Browne, who had joined BP in 1966 and risen through the ranks to join the board as managing director in 1991, was appointed group chief executive in 1995.

In 1981, British Petroleum entered the solar technology sector by acquiring 50% of Lucas Energy Systems, a company which became Lucas BP Solar Systems, and later BP Solar. The company was a manufacturer and installer of photovoltaic solar cells. It became wholly owned by British Petroleum in the mid-1980s.

British Petroleum entered the Russian market in 1990 and opened its first service station in Moscow in 1996. In 1997, it acquired a 10% stake in the Russian oil company Sidanco for $571 million; Sidanco later became a part of TNK-BP. Sidanco was run by the Russian oligarch Vladimir Potanin, who obtained Sidanco through the controversial loans-for-shares privatisation scheme. In 2003, BP invested $8 billion into a joint venture with Russian oligarch Mikhail Fridman's TNK.

In 1992, the company entered the Azerbaijani market. In 1994, it signed the production sharing agreement for the Azeri–Chirag–Guneshli oil project and in 1995 for the Shah Deniz gas field development.

1998 to 2009

Under John Browne, British Petroleum acquired other oil companies, transforming BP into the third-largest oil company in the world. British Petroleum merged with Amoco (formerly Standard Oil of Indiana) in December 1998, becoming BP Amoco plc. Most Amoco stations in the United States were converted to BP's brand and corporate identity. In 2000, BP Amoco acquired Atlantic Richfield Co. (ARCO) and Burmah Castrol. Through the acquisition of ARCO in 2000, BP became owner of a 33.5% stake in the Olympic Pipeline. Later that year, BP became the operator of the pipeline and increased its stake to 62.5%.

As part of the merger's brand awareness campaign, the company helped the Tate Britain gallery of British art launch RePresenting Britain 1500–2000. In 2001, in response to negative press about British Petroleum's poor safety standards, the company adopted a green sunburst logo and rebranded itself as BP ("Beyond Petroleum") plc. In the early 2000s, BP became the leading partner (and later operator) of the Baku–Tbilisi–Ceyhan pipeline project, which opened a new oil transportation route from the Caspian region. In 2002, BP acquired the majority of Veba Öl AG, a subsidiary of VEBA AG, and subsequently rebranded its existing stations in Germany to the Aral name. As part of the deal, BP also acquired Veba Öl's stake in the Ruhr Öl joint venture. Ruhr Öl was dissolved in 2016.

On 1 September 2003, BP and a group of Russian billionaires, known as AAR (Alfa–Access–Renova), announced the creation of a strategic partnership to jointly hold their oil assets in Russia and Ukraine. As a result, TNK-BP was created.

In 2004, BP's olefins and derivatives business was moved into a separate entity, which was sold to Ineos in 2005. In 2007, BP sold its corporate-owned convenience stores, typically known as "BP Connect", to local franchisees and jobbers.

On 23 March 2005, 15 workers were killed and more than 170 injured in the Texas City Refinery explosion. To save money, major upgrades to the 1934 refinery had been postponed. Browne pledged to prevent another catastrophe. Three months later, 'Thunder Horse PDQ', BP's giant new production platform in the Gulf of Mexico, nearly sank during a hurricane. In the rush to finish the $1 billion platform, workers had installed a valve backwards, allowing the ballast tanks to flood. Inspections revealed other shoddy work.
Repairs costing hundreds of millions of dollars would keep Thunder Horse out of commission for three years.

Lord Browne resigned from BP on 1 May 2007. The head of exploration and production, Tony Hayward, became the new chief executive. In 2009, Hayward shifted emphasis away from Lord Browne's focus on alternative energy, announcing that safety would henceforth be the company's "number one priority".

In 2007, BP formed a joint venture, Vivergo Fuels, with AB Sugar and DuPont, which opened a bioethanol plant in Saltend near Hull, United Kingdom in December 2012. Together with DuPont, BP formed a biobutanol joint venture, Butamax, by acquiring the biobutanol technology company Biobutanol LLC in 2009.

In 2009, BP obtained a production contract to develop the supergiant Rumaila field with joint venture partner CNPC.

2010 to 2020

In January 2010, Carl-Henric Svanberg became chairman of BP's board of directors.

On 20 April 2010, the Deepwater Horizon oil spill, a major industrial accident, occurred. Consequently, Bob Dudley replaced Tony Hayward as the company's CEO, serving from October 2010 to February 2020. BP announced a divestment programme to sell about $38 billion worth of non-core assets to cover its liabilities related to the accident. In July 2010, BP sold its natural gas activities in Alberta and British Columbia, Canada, to Apache Corporation. Further divestments followed: its stake in the Petroperijá and Boquerón fields in Venezuela and in the Lan Tay and Lan Do fields, the Nam Con Son pipeline and terminal, and the Phu My 3 power plant in Vietnam to TNK-BP; forecourts and supply businesses in Namibia, Botswana, Zambia, Tanzania and Malawi to Puma Energy; the Wytch Farm onshore oilfield in Dorset and a package of North Sea gas assets to Perenco; its natural-gas liquids business in Canada to Plains All American Pipeline LP; natural gas assets in Kansas to Linn Energy; the Carson Refinery in Southern California and its ARCO retail network to Tesoro; the Sunray and Hemphill gas processing plants in Texas, together with their associated gas gathering system, to Eagle Rock Energy Partners; the Texas City Refinery and associated assets to Marathon Petroleum; the Gulf of Mexico Marlin, Dorado, King, Horn Mountain, and Holstein fields, as well as its stake in the non-operated Diana Hoover and Ram Powell fields, to Plains Exploration & Production; its non-operating stake in the Draugen oil field to Norske Shell; and the UK's liquefied petroleum gas distribution business to DCC. In November 2012, the U.S. government temporarily banned BP from bidding for any new federal contracts. The ban was conditionally lifted in March 2014.

In February 2011, BP formed a partnership with Reliance Industries, taking a 30% stake in a new Indian joint venture for an initial payment of $7.2 billion. In September 2012, BP sold its subsidiary BP Chemicals (Malaysia) Sdn. Bhd., the operator of the Kuantan purified terephthalic acid (PTA) plant in Malaysia, to Reliance Industries for $230 million. In October 2012, BP sold its stake in TNK-BP to Rosneft for $12.3 billion in cash and 18.5% of Rosneft's stock. The deal was completed on 21 March 2013. In 2012, BP acquired acreage in the Utica Shale, but these development plans were cancelled in 2014.

Between 2011 and 2015, BP scaled back its alternative energy business. The company announced its departure from the solar energy market in December 2011 by closing its solar power business, BP Solar.
In 2012, BP shut down the BP Biofuels Highlands project, which had been under development since 2008 to make cellulosic ethanol from emerging energy crops such as switchgrass and from biomass. In 2015, BP decided to exit its other lignocellulosic ethanol businesses. It sold its stake in Vivergo to Associated British Foods, and BP and DuPont mothballed their joint biobutanol pilot plant in Saltend.

In June 2014, BP agreed to a deal worth around $20 billion to supply CNOOC with liquefied natural gas. In 2014, Statoil Fuel & Retail sold its aviation fuel business to BP. To ensure the approval of competition authorities, BP agreed to sell the former Statoil aviation fuel businesses at Copenhagen, Stockholm, Gothenburg and Malmö airports to World Fuel Services in 2015.

In 2016, BP sold its Decatur, Alabama, plant to Indorama Ventures of Thailand. In the same year, its Norwegian subsidiary BP Norge merged with Det Norske Oljeselskap to form Aker BP.

In April 2017, the company reached an agreement to sell its Forties pipeline system in the North Sea to Ineos for $250 million. The sale included terminals at Dalmeny and Kinneil, a site in Aberdeen, and the Forties Unity Platform. In 2017, the company floated its subsidiary BP Midstream Partners LP, a pipeline operator in the United States, on the New York Stock Exchange. In Argentina, BP and Bridas Corporation agreed to merge their interests in Pan American Energy and Axion Energy to form the jointly owned Pan American Energy Group.

In 2017, BP invested $200 million to acquire a 43% stake in the solar energy developer Lightsource Renewable Energy, which was renamed Lightsource BP. In March 2017, the company acquired Clean Energy's biomethane business and assets, including its production sites and existing supply contracts. In April 2017, its subsidiary Butamax bought the isobutanol production company Nesika Energy.

In 2018, the company purchased BHP's shale assets in Texas and Louisiana, including Petrohawk Energy, for $10.5 billion; these were integrated into its subsidiary BPX Energy. Also in 2018, BP bought a 16.5% interest in the Clair field in the UK from ConocoPhillips, increasing its share to 45.1%. BP paid £1.3 billion and gave ConocoPhillips its 39.2% non-operated stake in the Kuparuk River Oil Field and satellite oil fields in Alaska. In December 2018, BP sold its wind assets in Texas.

In 2018, BP acquired Chargemaster, which operated the UK's largest electric vehicle charging network. In 2019, BP and Didi Chuxing formed a joint venture to build out electric vehicle charging infrastructure in China. In September 2020, BP announced it would build out a rapid charging network in London for Uber.

In January 2019, BP discovered 1 billion barrels (160×10^6 m3) of oil at its Thunder Horse location in the Gulf of Mexico. The company also announced plans to spend $1.3 billion on a third phase of its Atlantis field near New Orleans.

2020 to present

Helge Lund succeeded Carl-Henric Svanberg on 1 January 2019 as chairman of the BP plc board of directors, and Bernard Looney succeeded Bob Dudley on 5 February 2020 as chief executive. Amidst the COVID-19 pandemic, BP claimed that it would "accelerate the transition to a lower carbon economy and energy system" after announcing that the company had to write down $17.5 billion for the second quarter of 2020.

On 29 June 2020, BP sold its petrochemicals unit to Ineos for $5 billion. The business was focused on aromatics and acetyls.
It had interests in 14 plants in Asia, Europe and the U.S., and achieved production of 9.7 million metric tons in 2019. On 30 June 2020, BP sold all its Alaska upstream operations and interests, including its interests in the Prudhoe Bay Oil Field, to Hilcorp for $5.6 billion. On 14 December 2020, it sold its 49% stake in the Trans-Alaska Pipeline System to Harvest Alaska.

In September 2020, BP formed a partnership with Equinor to develop offshore wind and announced it would acquire a 50% non-operating stake in the Empire Wind (off New York) and Beacon Wind (off Massachusetts) offshore wind farms. The deal was expected to be completed in the first half of 2021. In December 2020, BP acquired a majority stake in Finite Carbon, the largest forest carbon offsets developer in the United States.

In response to the 2022 Russian invasion of Ukraine, BP announced that it would sell its 19.75% stake in Rosneft, although no timeline was announced. At the time of BP's decision, Rosneft's activities accounted for around half of BP's oil and gas reserves and a third of its production. BP's decision came after the British government expressed concern about BP's involvement in Russia. However, BP remained a Rosneft shareholder throughout 2022, drawing criticism from the Ukrainian president's office.

In October 2022, BP announced that it would acquire Archaea Energy Inc., a renewable natural gas producer, for $4.1 billion. In December 2022, BP announced that it had completed the acquisition of Archaea Energy for $3.3 billion. In November 2022, the company announced a large increase in profit for the period from July to September due to the high fuel prices caused by the Russian invasion of Ukraine.

In February 2023, BP reported record annual profits, on a replacement cost basis, for the year 2022. On that basis, 2022 profits were more than double those of 2021, and they were also the biggest profits in BP's 114-year history.

After 10 years of force majeure, BP, Eni and Sonatrach resumed exploration in their blocks in the Ghadames Basin (A-B) and offshore Block C in August 2023, continuing their contract obligations.

Logo evolution

Operations

As of 31 December 2018, BP had operations in 78 countries worldwide, with its global headquarters in London, United Kingdom. BP's operations are organised into three business segments: Upstream, Downstream, and Renewables.

Since 1951, BP has annually published its Statistical Review of World Energy, which is considered an energy industry benchmark.

Operations by location

United Kingdom

BP has a major corporate campus in Sunbury-on-Thames, which is home to around 3,500 employees and over 50 business units. Its North Sea operations are headquartered in Aberdeen, Scotland. BP's trading functions are based at 20 Canada Square in Canary Wharf, London. BP has three major research and development centres in the UK.

As of 2020, and following the sale of its Andrew and Shearwater interests, BP's operations were focused on the Clair, Quad 204 and ETAP hubs. In 2011, the company announced that it was focusing its investment in the UK North Sea on four development projects: the Clair, Devenick, Schiehallion and Loyal, and Kinnoull oilfields. BP is the operator of the Clair oilfield, which has been appraised as the largest hydrocarbon resource in the UK. There are 1,200 BP service stations in the UK.
Since 2018, BP has operated the UK's largest electric vehicle charging network through its subsidiary BP Pulse (formerly Chargemaster).

In February 2020, BP announced a joint venture with EnBW to develop and operate 3 GW of offshore wind capacity in the Crown Estate's Leasing Round 4. This is BP's first move into Britain's offshore wind market; however, BP already provides a range of services to the UK offshore wind sector through its subsidiary ONYX InSight, which provides predictive maintenance and engineering consultancy services.

In February 2022, BP announced it had acquired a 30% stake in the London-based company Green Biofuels Ltd, a producer of renewable hydrogenated vegetable oil fuels that can be used as a direct replacement for diesel.

United States

The United States operations comprise nearly one-third of BP's business. BP employs approximately 14,000 people in the United States. In 2018, BP's total production in the United States included 385,000 barrels per day (61,200 m3/d) of oil and 1.9 billion cubic feet per day (54 million cubic metres per day) of natural gas, and its refinery throughput was 703,000 barrels per day (111,800 m3/d).

BP's major subsidiary in the United States is BP America, Inc. (formerly Standard Oil Company (Ohio), or Sohio), based in Houston, Texas. BP Exploration & Production Inc., a Houston-based subsidiary established in 1996, deals with oil exploration and production. BP Corporation North America, Inc. provides petroleum refining services as well as transportation fuel, heat and light energy. BP Products North America, Inc., a Houston-based subsidiary established in 1954, is engaged in the exploration, development, production, refining, and marketing of oil and natural gas. BP America Production Company, a New Mexico-based subsidiary, engages in oil and gas exploration and development. BP Energy Company, a Houston-based subsidiary, is a provider of natural gas, power, and risk management services to the industrial and utility sectors and a retail electric provider in Texas.

BP's upstream activities in the Lower 48 states are conducted through Denver-based BPX Energy. It has a 7.5 billion barrels (1.19 billion cubic metres) resource base on 5.7 million acres (23,000 km2). It has shale positions in the Woodford (Oklahoma), Haynesville (Texas), and Eagle Ford (Texas) shales, and unconventional gas (shale gas or tight gas) stakes in Colorado, New Mexico and Wyoming, primarily in the San Juan Basin.

As of 2019, BP produced about 300,000 barrels per day (48,000 m3/d) of oil equivalent in the Gulf of Mexico. BP operates the Atlantis, Mad Dog, Na Kika, and Thunder Horse production platforms while holding interests in hubs operated by other companies. In April 2023, BP launched a new production platform, the Argos, in the Gulf.

BP operates the Whiting Refinery in Indiana and the Cherry Point Refinery in Washington. It formerly co-owned and operated a refinery in Toledo, Ohio, with Husky Energy, but sold its stake in the refinery to Cenovus Energy in February 2023.

BP operates nine onshore wind farms in six states and held an interest in another in Hawaii, with a net generating capacity of 1,679 MW. These wind farms include the Cedar Creek 2, Titan 1, Goshen North, Flat Ridge 1 and 2, Mehoopany, Fowler Ridge 1, 2 and 3, and Auwahi wind farms. It is also in the process of acquiring a 50% non-operating stake in the Empire Wind (off New York) and Beacon Wind (off Massachusetts) offshore wind farms.
Other locations

In Egypt, BP produces approximately 15% of the country's total oil production and 40% of its domestic gas. The company also has offshore gas developments in the East Nile Delta Mediterranean and in the West Nile Delta, where it has a joint investment of US$9 billion with Wintershall Dea to develop the North Alexandria and West Mediterranean concession offshore gas fields.

BP is active in offshore oil development in Angola, where it holds interests in a total of nine oil exploration and production blocks covering more than 30,000 square kilometres (12,000 sq mi). This includes four blocks it acquired in December 2011 and an additional block, operated by the Brazilian national oil company Petrobras, in which it holds a 40% stake.

BP has a stake in the exploration of two blocks of offshore deepwater assets in the South China Sea.

In India, BP owns a 30% share of oil and gas assets operated by Reliance Industries, including exploration and production rights in more than 20 offshore oil and gas blocks, representing an investment of more than US$7 billion in oil and gas exploration in the country.

BP has major liquefied natural gas activities in Indonesia, where it operates the Tangguh LNG project, which began production in 2009 and has a capacity of 7.6 million tonnes of liquefied natural gas per year. Also in that country, the company has invested in the exploration and development of coalbed methane.

BP operates in Iraq as part of the Rumaila Operating Organization joint venture in the Rumaila oil field, the world's fourth-largest oilfield, where it produced over 1 million barrels per day (160×10^3 m3/d) of oil equivalent in 2011. A BBC investigation found in 2022 that waste gas was being burned as close as 350 metres from people's homes. A leaked report from the Iraqi Ministry of Health blamed air pollution for a 20% rise in cancer in Basra between 2015 and 2018, and the ministry has banned its employees from speaking about the health damage. Iraqi Environment Minister Jassem al-Falahi later admitted that "pollution from oil production is the main reason for increases in local cancer rates."

In Oman, BP has a 60% participating interest in Block 61, one of Oman's largest gas blocks, with a daily production capacity of 1.5 billion cubic feet of gas and more than 65,000 barrels of condensate. It covers around 3,950 square kilometres in central Oman and contains the largest tight gas development in the Middle East. On 1 February 2021, BP signed a deal to sell a 20% participating interest in Block 61 to Thailand's PTT Exploration and Production Public Company Ltd. (PTTEP) for a total of $2.6 billion. Upon closure of the sale, BP would remain the block's operator with a 40% interest.

BP operates the Kwinana refinery in Western Australia, which can process up to 146,000 barrels per day (23,200 m3/d) of crude oil and is the country's largest refinery, supplying fuel to 80% of Western Australia. BP is a non-operating joint venture partner in the North West Shelf, which produces LNG, pipeline gas, condensate and oil. The NWS venture is Australia's largest resource development and accounts for around one-third of Australia's oil and gas production.

BP operates the two largest oil and gas production projects in Azerbaijan's sector of the Caspian Sea: the Azeri–Chirag–Guneshli offshore oil fields, which supply 80% of the country's oil production, and the Shah Deniz gas field. It also develops the Shafag-Asiman complex of offshore geological structures.
In addition, it operates the Sangachal terminal and Azerbaijan's major export pipelines through Georgia, such as the Baku–Tbilisi–Ceyhan, Baku–Supsa and South Caucasus pipelines.

BP's refining operations in continental Europe include Europe's second-largest oil refinery, located in Rotterdam, the Netherlands, which can process up to 377,000 barrels (59,900 m3) of crude oil per day. Other facilities are located in Ingolstadt, Gelsenkirchen and Lingen in Germany, as well as in Castellón, Spain.

In addition to its offshore operations in the British zone of the North Sea, BP has interests in the Norwegian section of the sea through its stake in Aker BP. As of December 2018, BP holds a 19.75% stake in Russia's state-controlled oil company Rosneft.

Retail operations of motor vehicle fuels in Europe are present in the United Kingdom, France, Germany (through the Aral brand), the Netherlands, Switzerland, Italy, Austria, Poland, Greece and Turkey.

BP's Canadian operations are headquartered in Calgary and the company operates primarily in Newfoundland. It purchases crude oil for the company's refineries in the United States, and has a 35 per cent stake in the undeveloped Bay du Nord project and three offshore exploration blocks in Newfoundland.

BP is the largest oil and gas producer in Trinidad and Tobago, where it holds more than 1,350 square kilometres (520 sq mi) of offshore assets and is the largest shareholder in Atlantic LNG, one of the largest LNG plants in the Western Hemisphere.

In Brazil, BP holds stakes in offshore oil and gas exploration in the Barreirinhas, Ceará and Campos basins, in addition to onshore processing facilities. BP also operates biofuel production facilities in Brazil, including three cane sugar mills for ethanol production.

BP operated in Singapore until 2004, when it sold its retail network of 28 stations and its LPG business to Singapore Petroleum Company (SPC). It also sold its 50% stake in SPC.

In Türkiye, BP's operator was Petrol Ofisi (Vitol), with an agreement in place expected in 2024.

Exploration and production

BP Upstream's activities include exploring for new oil and natural gas resources, developing access to such resources, and producing, transporting, storing and processing oil and natural gas. These activities take place in 25 countries worldwide. In 2018, BP produced around 3.7 million barrels per day (590×10^3 m3/d) of oil equivalent, of which 2.191 million barrels per day (348.3×10^3 m3/d) were liquids and 8.659 billion cubic feet per day (245.2 million cubic metres per day) was natural gas, and had total proved reserves of 19,945 million barrels (3,171.0×10^6 m3) of oil equivalent, of which liquids accounted for 11,456 million barrels (1,821.4×10^6 m3) and natural gas for 49.239 trillion cubic feet (1.3943 trillion cubic metres). In addition to conventional oil exploration and production, BP has a stake in three oil sands projects in Canada.

BP expects its oil and gas production to fall by at least one million barrels a day by 2030, a 40% reduction from 2019 levels. The reduction excludes non-operated production and BP's stake in Rosneft.

Refining and marketing

BP Downstream's activities include the refining, marketing, manufacturing, transportation, trading and supply of crude oil and petroleum products. Downstream is responsible for BP's fuels and lubricants businesses, and has major operations located in Europe, North America and Asia.
As of 2018, BP owned or had a share in 11 refineries. BP, which employs about 1,800 people in oil trading and trades over 5 million barrels per day (790×10^3 m3/d) of oil and refined products, is the world's third-biggest oil trader after Royal Dutch Shell and Vitol. The operation is estimated to be able to generate over $1 billion in trading profits in a good year.

Air BP is the aviation division of BP, providing aviation fuel, lubricants and services. It has operations in over 50 countries worldwide. BP Shipping provides the logistics to move BP's oil and gas cargoes to market, as well as marine structural assurance. It manages a large fleet of vessels, most of which are held on long-term operating leases. BP Shipping's chartering teams, based in London, Singapore, and Chicago, also charter third-party vessels on both a time charter and voyage charter basis. The BP-managed fleet consists of Very Large Crude Carriers (VLCCs), one North Sea shuttle tanker, medium-size crude and product carriers, liquefied natural gas (LNG) carriers, liquefied petroleum gas (LPG) carriers, and coasters. All of these ships are double-hulled.

BP has around 18,700 service stations worldwide. Its flagship retail brand is BP Connect, a chain of service stations combined with a convenience store, although in the US it is gradually being transitioned to the ampm format. Since 2019, BP has also owned half of the Kentucky-based convenience store company Thorntons LLC with ArcLight Capital Partners (which owns the Gulf brand in the United States). On 13 July 2021, BP announced it would acquire ArcLight Capital Partners' share of Thorntons and thus fully own the convenience store company; the deal was expected to close later in the year. In Germany and Luxembourg, BP operates service stations under the Aral brand. On the US West Coast, in the states of California, Oregon, Washington, Nevada, Idaho, Arizona, and Utah, BP primarily operates service stations under the ARCO brand. In Australia, BP operates a number of BP Travel Centres, large-scale destination sites which, in addition to the usual facilities of a BP Connect site, also feature food-retail tenants such as McDonald's, KFC and Nando's, and facilities for long-haul truck drivers.

Castrol is BP's main brand for industrial and automotive lubricants and is applied to a large range of BP oils, greases and similar products for most lubrication applications.

Clean energy rhetoric

BP's public rhetoric and pledges emphasise that the company is shifting towards climate-friendly, low-carbon and transition strategies. However, a 2022 study found that the company's spending on clean energy was insignificant and opaque, with little to suggest that its discourse matched its actions.

BP was the first of the supermajors to say that it would focus on energy sources other than fossil fuels. It established an alternative and low carbon energy business in 2005. According to the company, it spent a total of $8.3 billion on renewable energy projects, including solar, wind, and biofuels, and on non-renewable projects, including natural gas and hydrogen power, through 2013. The relatively small size of BP's alternative energy operations has led to allegations of greenwashing by Greenpeace, Mother Jones, and energy analyst and activist Antonia Juhasz, among others. In 2018, CEO Bob Dudley said that out of the company's total spending of $15 to $17 billion per year, about $500 million would be invested in low-carbon energy and technology.
In August 2020, BP promised to increase its annual low-carbon investments to $5 billion by 2030. The company announced plans to transform into an integrated energy company, with a renewed focus on investing away from oil and into low-carbon technologies. It has set targets to have a renewables portfolio of 20 GW by 2025 and 50 GW by 2030.

BP operates nine wind farms in seven U.S. states and held an interest in another in Hawaii, with a net generating capacity of 1,679 MW. It is also in the process of acquiring a 50% non-operating stake in the Empire Wind (off New York) and Beacon Wind (off Massachusetts) offshore wind farms. BP and Tesla, Inc. are cooperating to test battery energy storage at the Titan 1 wind farm. BP Launchpad has also invested in ONYX InSight, one of the leading providers of predictive analytics serving the wind industry.

In Brazil, BP owns two ethanol producers, Companhia Nacional de Açúcar e Álcool and Tropical BioEnergia, with three ethanol mills. These mills produce around 800,000 cubic metres per annum (5,000,000 bbl/a) of ethanol equivalent. BP has invested in the agricultural biotechnology company Chromatin, which develops crops that can grow on marginal land and are optimised for use as biofuel feedstock. Its joint venture with DuPont, Butamax, has developed patented bio-butanol production technology and owns an isobutanol plant in Scandia, Kansas, United States. In addition, BP owns biomethane production facilities in Canton, Michigan, and North Shelby, Tennessee, as well as a share of facilities under construction in Oklahoma City and Atlanta. BP's subsidiary Air BP supplies aviation biofuel at Oslo, Halmstad, and Bergen airports.

BP owns a 43% stake in Lightsource BP, a company which focuses on managing and maintaining solar farms. As of 2017, Lightsource had commissioned 1.3 GW of solar capacity and managed about 2 GW, with plans to increase capacity to 8 GW through projects in the United States, India, Europe and the Middle East. BP has invested $20 million in the Israeli quick-charging battery firm StoreDot Ltd. It operates electric vehicle charging networks in the UK through its subsidiary BP Chargemaster, and in China via a joint venture with Didi Chuxing.

In partnership with Ørsted A/S, BP plans a 50 MW electrolyser at the Lingen refinery to produce hydrogen using North Sea wind power. Production is expected to begin in 2024.

BP is a majority shareholder in the carbon offset developer Finite Carbon, and acquired 9 GW of US solar projects in 2021.

In 2023, following the announcement of record profits, the company scaled back its emissions targets. Originally, the company had promised a 35–40% cut in emissions by the end of the decade. On 7 February, BP revised the target to a 20–30% cut, stating that it needed to keep up with current demand for oil and gas.
Corporate affairs

Management

As of October 2023, the following individuals serve on the board:

Helge Lund (chairman)
Murray Auchincloss (acting chief executive officer)
Paula Rosput Reynolds (senior independent director)
Amanda Blanc (independent non-executive director)
Pamela Daley (independent non-executive director)
Melody Meyer (independent non-executive director)
Tushar Morzaria (independent non-executive director)
Hina Nagarajan (independent non-executive director)
Satish Pai (independent non-executive director)
Karen Richardson (independent non-executive director)
Sir John Sawers (independent non-executive director)
Johannes Teyssen (independent non-executive director)
Ben Mathews (company secretary)

Past chairmen

Past chairmen have included:

The Lord Strathalmond, 1954–1956
Basil Jackson, 1956–1957
Sir Neville Gass, 1957–1960
Sir Maurice Bridgeman, 1960–1969
Sir Eric Drake, 1969–1975
Sir David Steel, 1975–1981
Sir Peter Walters, 1981–1990
Sir Robert Horton, 1990–1992
The Lord Ashburton, 1992–1995
The Lord Simon of Highbury, 1995–1997
Peter Sutherland, 1997–2009
Carl-Henric Svanberg, 2010–2018
Helge Lund, 2019–

Stock

The company's shares are primarily traded on the London Stock Exchange, but are also listed on the Frankfurt Stock Exchange in Germany. In the United States, shares are traded in US dollars on the New York Stock Exchange in the form of American depositary shares (ADSs); one ADS represents six ordinary shares.

Following the United States Federal Trade Commission's approval of the BP-Amoco merger in 1998, Amoco's stock was removed from the S&P 500 and merged with BP shares on the London Stock Exchange.

Branding and public relations

In the first quarter of 2001, the company adopted the marketing name of BP and replaced its "Green Shield" logo with the "Helios" symbol, a green and yellow sunflower logo named after the Greek sun god and designed to represent energy in its many forms. BP introduced a new corporate slogan, "Beyond Petroleum", along with a $200 million advertising and marketing campaign. According to the company, the new slogan represented its focus on meeting the growing demand for fossil fuels, manufacturing and delivering more advanced products, and enabling a transition to a lower carbon footprint.

By 2008, BP's branding campaign had succeeded, culminating in a 2007 Effie Award from the American Marketing Association, and consumers had the impression that BP was one of the greenest petroleum companies in the world. BP was criticised by environmentalists and marketing experts, who stated that the company's alternative energy activities were only a fraction of its business at the time. According to Democracy Now, BP's marketing campaign amounted to a deceptive greenwashing public-relations spin campaign, given that BP's 2008 budget included more than $20 billion for fossil fuel investment and less than $1.5 billion for all alternative forms of energy. Oil and energy analyst Antonia Juhasz notes that BP's investment in green technologies peaked at 4% of its exploratory budget prior to cutbacks, including the discontinuation of BP Solar and the closure of its alternative energy headquarters in London.
According to Juhasz, "four percent...hardly qualifies the company to be Beyond Petroleum", citing BP's "aggressive modes of production, whether it's the tar sands [or] offshore".

BP attained a negative public image from the series of industrial accidents that occurred through the 2000s, and its public image was severely damaged after the Deepwater Horizon explosion and Gulf oil spill. In the immediate aftermath of the spill, BP initially downplayed the severity of the incident and made many of the same PR errors that Exxon had made after the Exxon Valdez disaster. CEO Tony Hayward was criticised for his statements and committed several gaffes, including stating that he "wanted his life back." Some in the media commended BP for some of its social media efforts, such as the use of Twitter and Facebook as well as a section of the company's website where it communicated its efforts to clean up the spill.

In February 2012, BP North America launched a $500 million branding campaign to rebuild its brand. The company's advertising budget was about $5 million per week during the four-month spill in the Gulf of Mexico, totalling nearly $100 million.

In May 2012, BP tasked a press office staff member with openly joining discussions on the Wikipedia article's talk page and suggesting content to be posted by other editors. Controversy emerged in 2013 over the amount of content from BP that had entered the article. Wikipedia co-founder Jimmy Wales stated that, by identifying himself as a BP staff member, the contributor in question had complied with site policy regarding conflicts of interest.

Integrity and compliance

Investigative journalism by BBC Panorama and Africa Eye, aired in June 2019, criticised BP for the way in which it had obtained the development rights to the Cayar Offshore Profond and St. Louis Offshore Profond blocks off the coast of Senegal in 2017. In 2012, Petro-Tim, a Frank Timiș company with no known record in the oil industry, was awarded a licence to explore the blocks. Soon after, Aliou Sall, brother of Senegal's president Macky Sall, was hired by the company, implying a conflict of interest and causing public outrage in Senegal. The 2019 programme by BBC Panorama and Africa Eye accuses BP of a failure of due diligence when it agreed a deal with Timis Corporation in 2017. The deal is expected to provide substantial royalties to Frank Timiș despite accusations that the exploration rights were initially obtained through corruption. Kosmos Energy was also implicated. BP rejects any implication of improper conduct. Regarding the acquisition of Timis Corporation's interests in Senegal in April 2017, BP states that it "paid what it considered a fair market value for the interests at this stage of exploration/development". However, BP has not made public the basis of the valuation, and states that "the details of the deal are confidential". BP argues that "the amount which would be paid separately by BP to Timis Corporation would be less than one percent of what the Republic of Senegal would receive". Senegal's justice ministry has called an inquiry into the energy contracts.

LGBTQ recognition

In 2014, BP backed a global study researching challenges for lesbian, gay, bisexual and transgender employees and ways that companies can be a "force for change" for LGBT workers around the world. In 2015, Reuters wrote that BP is "known for their more liberal policies for gay and transgender workers".
A 2016 article in the Houston Chronicle said BP was "among the first major companies in the United States to offer LGBT workers equal protection and benefits roughly 20 years ago". BP scored 100% on the 2018 Human Rights Campaign Corporate Equality Index, which was released in 2017, although this was the most common score. Also in 2017, BP added gender reassignment surgery to its list of benefits for U.S. employees. According to the Human Rights Campaign, BP is one of only a few oil and gas companies offering transgender benefits to its employees. BP ranked No. 51 on the list of top 100 employers for lesbian, gay, bisexual and transgender staff in the 2017 Stonewall Workplace Equality Index. Also in 2017, John Mingé, chairman and president of BP America, signed a letter alongside other Houston oil executives denouncing the proposed "bathroom bill" in Texas.

Environmental record

Climate policy

Prior to 1997, BP was a member of the Global Climate Coalition, an industry organisation established to promote global warming scepticism, but withdrew in 1997, saying "the time to consider the policy dimensions of climate change is not when the link between greenhouse gases and climate change is conclusively proven, but when the possibility cannot be discounted and is taken seriously by the society of which we are part. We in BP have reached that point." BP was distinguished as the first multinational outside the reinsurance industry to publicly support the scientific consensus on climate change, which Pew Center on Global Climate Change president Eileen Claussen then described as a transformative moment on the issue. In March 2002, Lord John Browne, then group chief executive of BP, declared in a speech that global warming was real and that urgent action was needed. Notwithstanding this, from 1988 to 2015 BP was responsible for 1.53% of global industrial greenhouse gas emissions. In 2015, BP was listed by the UK-based non-profit organisation InfluenceMap as the fiercest opponent of action on climate change in Europe. In 2018, BP was the largest contributor to the campaign opposing the carbon fee Initiative 1631 in Washington State. Robert Allendorfer, manager of BP's Cherry Point refinery, wrote the following in a letter to state lawmakers: "[Initiative 1631] would exempt six of the ten largest stationary source emitters in the state, including a coal-fired power plant, an aluminum smelter, and a number of pulp and paper plants." According to a 2019 Guardian ranking, BP was the sixth-largest emitter of greenhouse gases in the world.

In February 2020, BP set a goal to cut its greenhouse gas emissions to net zero by 2050. BP is seeking net-zero carbon emissions across its operations and the fuels the company sells, including emissions from cars, homes, and factories, although publicly available details on the scope of this goal and how it will be achieved are limited. BP said that it is restructuring its operations into four business groups to meet these goals: production and operations; customers and products; gas and low carbon; and innovation and engineering. As part of this new commitment, the company discontinued its involvement with the American Fuel and Petrochemical Manufacturers, the Western States Petroleum Association, and the Western Energy Alliance, all involved in lobbying government within the United States, because of differences of position on methane and carbon policies.
However, an investigation conducted by Unearthed, an investigations unit of Greenpeace UK, and HuffPost revealed eight anti-climate trade associations that BP had failed to disclose, including the Alliance of Western Energy Consumers, the Texas Oil and Gas Association, the Australian Petroleum Production and Exploration Association, and the Business Council of Australia, among others.

In August 2020, BP America's chairman David Lawler criticised the elimination of federal requirements to install equipment to detect and fix methane leaks, saying that "direct federal regulation of methane emissions is essential to preventing leaks throughout the industry and protecting the environment."

In BP's Energy Outlook 2020, BP stated that the changing energy landscape, coupled with the economic toll of the COVID-19 pandemic, means that global crude demand will never again surpass 2019's average. All three scenarios in the outlook see the consumption of coal, oil, and natural gas dropping while the role of renewable energy soars. BP is also attempting to move from being an international oil company to an integrated energy company focused on low-carbon technologies, and has set a target to reduce its overall oil and gas production by 40% by 2030.

In 2021, BP was ranked the 5th most environmentally responsible company out of 120 oil, gas, and mining companies involved in resource extraction north of the Arctic Circle in the Arctic Environmental Responsibility Index (AERI).

In December 2022, U.S. House Oversight and Reform Committee Chair Carolyn Maloney and U.S. House Oversight Environment Subcommittee Chair Ro Khanna sent a memorandum to all House Oversight and Reform Committee members summarizing additional findings from the Committee's investigation into the fossil fuel industry's disinformation campaign to obscure the role of fossil fuels in causing global warming. Based on its review of internal company documents, the memorandum accused BP, along with ExxonMobil, Chevron, and Shell, of greenwashing their Paris Agreement carbon neutrality pledges while continuing long-term investment in fossil fuel production and sales, of engaging in a campaign to promote the use of natural gas as a clean energy source and bridge fuel to renewable energy, of intimidating journalists reporting on the companies' climate actions, and of obstructing the Committee's investigation.

After initially pledging to reduce its emissions by 35% by 2030, BP stated in 2023 that it would aim for a 20–30% reduction instead.

Indigenous rights

In a 2016 study conducted by Indra Øverland of the Norwegian Institute of International Affairs, BP was ranked 15th out of 18 levels (37th overall out of 92 oil, gas and mining companies) on indigenous rights and resource extraction in the Arctic. The ranking took into account 20 criteria, such as the companies' commitments to international standards, the presence of organisational units dedicated to handling indigenous rights, competent staffing, track records on indigenous issues, transparency, and procedures for consulting with indigenous peoples; the actual performance of companies on indigenous rights was not assessed.

Hazardous substance dumping 1993–1995

In September 1999, one of BP's US subsidiaries, BP Exploration Alaska (BPXA), pleaded guilty to criminal charges stemming from its illegal dumping of hazardous wastes on the Alaska North Slope, paying fines and penalties totalling $22 million.
BP paid the maximum $500,000 in criminal fines, $6.5 million in civil penalties, and established a $15 million environmental management system at all of BP's facilities in the US and Gulf of Mexico that are engaged in oil exploration, drilling or production. The charges stemmed from the 1993 to 1995 dumping of hazardous wastes on Endicott Island, Alaska, by BP's contractor Doyon Drilling. The firm illegally discharged waste oil, paint thinner and other toxic and hazardous substances by injecting them down the outer rim, or annuli, of the oil wells. BPXA failed to report the illegal injections when it learned of the conduct, in violation of the Comprehensive Environmental Response, Compensation and Liability Act.

Air pollution violations

In 2000, BP Amoco acquired ARCO, a Los Angeles-based oil group. In 2003, California's South Coast Air Quality Management District (AQMD) filed a complaint against BP/ARCO, seeking $319 million in penalties for thousands of air pollution violations over an eight-year period. In January 2005, the agency filed a second suit against BP based on violations between August 2002 and October 2004. The suit alleged that BP illegally released air pollutants by failing to adequately inspect, maintain, repair and properly operate thousands of pieces of equipment across the refinery, as required by AQMD regulations. It was alleged that in some cases the violations were due to negligence, while in others the violations were knowingly and willfully committed by refinery officials. In 2005, a settlement was reached under which BP agreed to pay $25 million in cash penalties and $6 million in past emissions fees, while spending $20 million on environmental improvements at the refinery and $30 million on community programmes focused on asthma diagnosis and treatment.

In 2013, a total of 474 Galveston County residents living near the BP Texas City Refinery filed a $1 billion lawsuit against BP, accusing the company of "intentionally misleading the public about the seriousness" of a two-week release of toxic fumes which began on 10 November 2011. "BP reportedly released Sulfur Dioxide, Methyl Mercaptan, Dimethyl Disulfide and other toxic chemicals into the atmosphere", the suit reads. The lawsuit further claims that Galveston County has the worst air quality in the United States due to BP's violations of air pollution laws. BP had no comment and said it would address the suit in the court system.

Colombian farmland damages claim

In 2006, a group of Colombian farmers reached a multimillion-dollar out-of-court settlement with BP for alleged environmental damage caused by the Ocensa pipeline. The company was accused of benefiting from a regime of terror carried out by Colombian government paramilitaries to protect the 450-mile (720 km) Ocensa pipeline; BP said throughout that it had acted responsibly and that landowners were fairly compensated.

In 2009, another group of 95 Colombian farmers filed a suit against BP, saying the company's Ocensa pipeline caused landslides and damage to soil and groundwater, affecting crops and livestock and contaminating water supplies, making fish ponds unsustainable. Most of the land traversed by the pipeline was owned by peasant farmers who were illiterate and unable to read the environmental impact assessment conducted by BP prior to construction, which acknowledged significant and widespread risks of damage to the land. The Supreme Court of Justice of Colombia handed down a judgement rejecting the case in August 2016.
Canadian oil sands
Since 2007, BP has been involved in oil sands projects, which Greenpeace has called a climate crime. Members of Canada's First Nations have criticised BP's involvement for the impacts oil sands extraction has on the environment. In 2010, BP pledged to use only in-situ technologies instead of open-pit mining, and it uses steam-assisted gravity drainage in-situ technology to extract bitumen. According to Greenpeace, this approach is even more damaging to the climate: while the Pembina Institute notes that in-situ techniques result in lower nitrogen oxide emissions and are less damaging to the landscape and rivers, they cause more greenhouse gas and sulphur dioxide emissions than mining. In 2010, activist shareholders asked BP for a full investigation of the Sunrise oil sands project, but were defeated. In 2013, shareholders criticised the project for being carbon-intensive.
Violations and accidents
Citing conditions similar to those that resulted in the 2005 Texas City Refinery explosion, on 25 April 2006 the U.S. Department of Labor's Occupational Safety and Health Administration (OSHA) fined BP more than $2.4 million for unsafe operations at the company's Oregon, Ohio refinery. An OSHA inspection resulted in 32 per-instance wilful citations, including locating people in vulnerable buildings among the processing units, failing to correct depressurisation deficiencies and deficiencies with gas monitors, and failing to prevent the use of non-approved electrical equipment in locations in which hazardous concentrations of flammable gases or vapours may exist. BP was further fined for neglecting to develop shutdown procedures and designate responsibilities, and for failing to establish a system to promptly address and resolve recommendations made after a large feed pump failed three years prior to 2006. Penalties were also issued for five serious violations, including failure to develop operating procedures for a unit that removes sulphur compounds; failure to ensure that operating procedures reflected current operating practice in the Isocracker Unit; failure to resolve process hazard analysis recommendations; failure to resolve process safety management compliance audit items in a timely manner; and failure to periodically inspect pressure piping systems.
In 2008, BP and several other major oil refiners agreed to pay $422 million to settle a class-action lawsuit stemming from water contamination tied to the gasoline additive MTBE, a chemical that was once a key gasoline ingredient. Leaked from storage tanks, MTBE has been found in several water systems across the United States. The plaintiffs maintained that the industry knew about the environmental dangers but used MTBE instead of other possible alternatives because it was less expensive. The companies were also required to pay 70% of cleanup costs for any wells newly affected at any time over the following 30 years.
BP has one of the worst safety records of any major oil company that operates in the United States. Between 2007 and 2010, BP refineries in Ohio and Texas accounted for 97% of "egregious, willful" violations handed out by the U.S. Occupational Safety and Health Administration (OSHA). BP had 760 "egregious, willful" violations during that period, while Sunoco and Conoco-Phillips each had eight, Citgo two and Exxon one.
The deputy assistant secretary of labor at OSHA said, "The only thing you can conclude is that BP has a serious, systemic safety problem in their company."
A report in ProPublica, published in The Washington Post in 2010, found that over a decade of internal investigations of BP's Alaska operations during the 2000s warned senior BP managers that the company repeatedly disregarded safety and environmental rules and risked a serious accident if it did not change its ways. ProPublica found that "Taken together, these documents portray a company that systemically ignored its own safety policies across its North American operations – from Alaska to the Gulf of Mexico to California and Texas. Executives were not held accountable for the failures, and some were promoted despite them."
The Project On Government Oversight, an independent non-profit organisation in the United States which investigates and seeks to expose corruption and other misconduct, lists BP as number one on its listing of the 100 worst corporations based on instances of misconduct.
1965 Sea Gem offshore oil rig disaster
In December 1965, Britain's first oil rig, Sea Gem, capsized when two of its legs collapsed during an operation to move it to a new location. The oil rig had been hastily converted in an effort to quickly start drilling operations after the North Sea was opened for exploration. Thirteen crew members were killed. No hydrocarbons were released in the accident.
Texas City Refinery explosion and leaks
The former Amoco oil refinery at Texas City, Texas, was beset by environmental issues, including chemical leaks and a 2005 explosion that killed 15 people and injured hundreds. Bloomberg News described the incident, which led to a guilty plea by BP to a felony Clean Air Act charge, as "one of the deadliest U.S. industrial accidents in 20 years." The refinery was sold to Marathon Petroleum in October 2012.
2005 explosion
In March 2005, the Texas City Refinery, then one of the largest refineries owned by BP, exploded, causing 15 deaths, injuring 180 people and forcing thousands of nearby residents to remain sheltered in their homes. A 20-foot (6.1 m) column filled with hydrocarbon overflowed to form a vapour cloud, which ignited. The explosion caused all the casualties and substantial damage to the rest of the plant. The incident came as the culmination of a series of less serious accidents at the refinery, and the engineering problems were not addressed by the management. Maintenance and safety at the plant had been cut as a cost-saving measure, the responsibility ultimately resting with executives in London.
The fallout from the accident clouded BP's corporate image because of the mismanagement at the plant. There have been several investigations of the disaster, the most recent by the US Chemical Safety and Hazard Investigation Board, which "offered a scathing assessment of the company." OSHA found "organizational and safety deficiencies at all levels of the BP Corporation" and said management failures could be traced from Texas to London. The company pleaded guilty to a felony violation of the Clean Air Act, was fined $50 million, the largest ever assessed under the Clean Air Act, and was sentenced to three years' probation.
On 30 October 2009, the US Occupational Safety and Health Administration (OSHA) fined BP an additional $87 million, the largest fine in OSHA history, for failing to correct safety hazards documented in the 2005 explosion.
Inspectors found 270 safety violations that had been cited but not fixed and 439 new violations. BP appealed the fine. In July 2012, the company agreed to pay $13 million to settle the new violations. At that time OSHA found "no imminent dangers" at the Texas plant; thirty violations remained under discussion. In March 2012, US Department of Justice officials said the company had met all of its obligations and subsequently ended the probationary period.
In November 2011, BP agreed to pay the state of Texas $50 million for violating state emissions standards at its Texas City refinery during and after the 2005 explosion at the refinery. The state Attorney General said BP was responsible for 72 separate pollutant emissions that had been occurring every few months since March 2005. It was the largest fine ever imposed under the Texas Clean Air Act.
2007 toxic substance release
In 2007, 143 workers at the Texas City refinery claimed that they were injured when a toxic substance was released at the plant. In December 2009, after a three-week trial, a federal jury in Galveston awarded ten of those workers $10 million each in punitive damages, in addition to smaller damages for medical expenses and pain and suffering. The plant had a history of chemical releases. In March 2010, the federal judge hearing the case reduced the jury's award to less than $500,000. U.S. District Judge Kenneth M. Hoyt said the plaintiffs had failed to prove BP was grossly negligent.
2010 chemical leak
In August 2010, the Texas Attorney General charged BP with illegally emitting harmful air pollutants from its Texas City refinery for more than a month. BP admitted that malfunctioning equipment led to the release of over 530,000 pounds (240,000 kg) of chemicals into the air of Texas City and surrounding areas from 6 April to 16 May 2010. The leak included 17,000 pounds (7,700 kg) of benzene, 37,000 pounds (17,000 kg) of nitrogen oxides, and 186,000 pounds (84,000 kg) of carbon monoxide. The state's investigation showed that BP's failure to properly maintain its equipment caused the malfunction. When the equipment malfunctioned and caught fire, BP workers shut it down and routed escaping gases to flares. Rather than shut down associated units while compressor repairs were made, BP chose to keep operating those other units, which led to the unlawful release of contaminants for almost 40 days. The Attorney General sought civil penalties of no less than $50 and no more than $25,000 per day for each violation of state air quality laws, as well as attorneys' fees and investigative costs.
In June 2012, over 50,000 Texas City residents joined a class-action suit against BP, alleging they became sick in 2010 as a result of the emissions release from the refinery. BP said the release harmed no one. In October 2013, a trial designed as a test for a larger suit covering 45,000 people found that BP was negligent, but, due to the lack of substantial evidence linking the illnesses to the emissions, the company was absolved of any wrongdoing.
Prudhoe Bay
In March 2006, corrosion of a BP Exploration Alaska (BPXA) oil transit pipeline in Prudhoe Bay transporting oil to the Trans-Alaska Pipeline led to a five-day leak and the largest oil spill on Alaska's North Slope. According to the Alaska Department of Environmental Conservation (ADEC), a total of 212,252 US gallons (5,053.6 bbl; 803.46 m3) of oil was spilled, covering 2 acres (0.81 ha) of the North Slope.
BP admitted that cost-cutting measures had resulted in a lapse in monitoring and maintenance of the pipeline and the consequent leak. At the time of the leak, pipeline inspection gauges (known as "pigs") had not been run through the pipeline since 1998. BP completed the clean-up of the spill by May 2006, including removal of contaminated gravel and vegetation, which was replaced with new material from the Arctic tundra.
Following the spill, the company was ordered by regulators to inspect the 35 kilometres (22 mi) of pipelines in Prudhoe Bay using "smart pigs". In late July 2006, the "smart pigs" monitoring the pipelines found 16 places where corrosion had thinned pipeline walls. A BP crew sent to inspect the pipe in early August discovered a leak and small spill, following which BP announced that the eastern portion of the Alaskan field would be shut down for repairs on the pipeline, with approval from the Department of Transportation. The shutdown resulted in a reduction of 200,000 barrels per day (32,000 m3/d) until work began to bring the eastern field to full production on 2 October 2006. In total, 23 barrels (3.7 m3) of oil were spilled and 176 barrels (28.0 m3) were "contained and recovered", according to ADEC. The spill was cleaned up and there was no impact upon wildlife.
After the shutdown, BP pledged to replace 26 kilometres (16 mi) of its Alaskan oil transit pipelines, and the company completed work on the new pipeline by the end of 2008. In November 2007, BP Exploration, Alaska pleaded guilty to negligent discharge of oil, a misdemeanour under the federal Clean Water Act, and was fined US$20 million. No charge was brought for the smaller spill in August 2006 due to BP's quick response and clean-up. On 16 October 2007, ADEC officials reported a "toxic spill" from a BP pipeline in Prudhoe Bay comprising 2,000 US gallons (7,600 L; 1,700 imp gal) of primarily methanol (methyl alcohol) mixed with crude oil and water, which spilled onto a gravel pad and a frozen tundra pond.
In a civil suit settled in July 2011, investigators from the U.S. Department of Transportation's Pipeline and Hazardous Materials Safety Administration determined that the 2006 spills were a result of BPXA's failure to properly inspect and maintain the pipeline to prevent corrosion. The government issued a Corrective Action Order to BPXA that addressed the pipeline's risks and ordered pipeline repair or replacement. The U.S. Environmental Protection Agency investigated the extent of the oil spills and oversaw BPXA's cleanup. When BPXA did not fully comply with the terms of the corrective action, a complaint was filed in March 2009 alleging violations of the Clean Water Act, the Clean Air Act and the Pipeline Safety Act. In July 2011, the U.S. District Court for the District of Alaska entered a consent decree between the United States and BPXA resolving the government's claims. Under the consent decree, BPXA paid a $25 million civil penalty, the largest per-barrel penalty at that time for an oil spill, and agreed to take measures to significantly improve inspection and maintenance of its pipeline infrastructure on the North Slope to reduce the threat of additional oil spills.
2008 Caspian Sea gas leak
On 17 September 2008, a small gas leak was discovered and one gas-injection well broached to surface in the area of the Central Azeri platform at the Azeri oilfield, a part of the Azeri–Chirag–Guneshli (ACG) project, in the Azerbaijan sector of the Caspian Sea.
The platform was shut down and the staff were evacuated. As the West Azeri platform was being powered by a cable from the Central Azeri platform, it was also shut down. Production at the West Azeri platform resumed on 9 October 2008, and at the Central Azeri platform in December 2008. According to leaked US Embassy cables, BP had been "exceptionally circumspect in disseminating information", and the cables showed that BP thought the cause of the blowout was a bad cement job. The cables further said that some of BP's ACG partners complained that the company was so secretive that it was withholding information even from them.
California storage tanks
The Santa Barbara County District Attorney sued BP West Coast Products LLC, BP Products North America, Inc., and Atlantic Richfield Company over allegations that the companies violated state laws on operating and maintaining underground storage tanks for motor vehicle fuel. The complaint alleged that BP failed to properly inspect and maintain underground tanks used to store gasoline for retail sale at approximately 780 gas stations in California over a period of ten years, and violated other hazardous material and hazardous waste laws. BP settled the lawsuit for $14 million in November 2016; the case was the result of collaboration between the California Attorney General's Office and several district attorney's offices across the state.
Deepwater Horizon explosion and oil spill
The Deepwater Horizon oil spill was a major industrial accident in the Gulf of Mexico which killed 11 people and injured 16 others. It leaked about 4.9 million barrels (210 million US gal; 780,000 m3) of oil, with plus or minus 10% uncertainty, making it the largest accidental marine oil spill in the history of the petroleum industry, and cost the company more than $65 billion in cleanup costs, charges and penalties. On 20 April 2010, the semi-submersible exploratory offshore drilling rig Deepwater Horizon, located in the Macondo Prospect in the Gulf of Mexico, exploded after a blowout. After burning for two days, the rig sank. The well was finally capped on 15 July 2010. Of the 4.9 million barrels (210 million US gal; 780,000 m3) of leaked oil, 810,000 barrels (34 million US gal; 129,000 m3) were collected or burned, while 4.1 million barrels (170 million US gal; 650,000 m3) entered the Gulf waters. 1.8 million US gallons (6,800 m3) of Corexit dispersant was applied. The spill had a strong economic impact on Gulf Coast economic sectors such as fishing and tourism.
Environmental impact
The oil spill caused damage across a range of species and habitats in the Gulf. Researchers say the oil and dispersant mixture, including PAHs, permeated the food chain through zooplankton. Toxicological effects have been documented in benthic and pelagic fish, estuarine communities, mammals, birds and turtles, deep-water corals, plankton, foraminifera, and microbial communities. Effects on different populations consisted of increased mortality or sub-lethal impairment of the organisms' ability to forage, reproduce and avoid predators. In 2013, it was reported that dolphins and other marine life continued to die in record numbers, with infant dolphins dying at six times the normal rate, and half the dolphins examined in a December 2013 study were seriously ill or dying. BP said the report was "inconclusive as to any causation associated with the spill."
Studies in 2013 suggested that as much as one-third of the released oil remained in the Gulf.
Further research suggested that the oil on the bottom of the seafloor was not degrading. Oil in affected coastal areas increased erosion due to the death of mangrove trees and marsh grass.
Researchers looking at sediment, seawater, biota, and seafood found toxic compounds in high concentrations that they said were due to the added oil and dispersants. Although Gulf fisheries recovered in 2011, a 2014 study of the effects of the oil spill on bluefin tuna by researchers at Stanford University and the National Oceanic and Atmospheric Administration, published in the journal Science, found that toxins released by the oil spill sent fish into cardiac arrest. The study found that even very low concentrations of crude oil can slow the pace of fish heartbeats. BP disputed the study, which was conducted as part of the federal Natural Resource Damage Assessment process required by the Oil Pollution Act. The study also found that oil already broken down by wave action and chemical dispersants was more toxic than fresh oil. Another peer-reviewed study, released in March 2014, conducted by 17 scientists from the United States and Australia and published in Proceedings of the National Academy of Sciences, found that tuna and amberjack exposed to oil from the spill developed deformities of the heart and other organs. BP responded that the concentrations of oil in the study were at a level rarely seen in the Gulf, but The New York Times reported that the BP statement was contradicted by the study.
Effects on human health
Research discussed at a 2013 conference included preliminary results of an ongoing study by the National Institute for Environmental Health Sciences indicating that oil spill cleanup workers carry biomarkers of chemicals contained in the spilled oil and the dispersants used. A separate study is following the health issues of women and children affected by the spill. Several studies found that a "significant percentage" of Gulf residents reported mental health problems such as anxiety, depression and PTSD. According to a Columbia University study investigating health effects among children living less than 10 miles from the coast, more than a third of parents reported physical or mental health symptoms among their children.
Australia's 60 Minutes reported that people living along the Gulf coast were becoming sick from the mixture of Corexit and oil. Susan Shaw, of the Deepwater Horizon oil spill Strategic Sciences Working Group, said: "BP told the public that Corexit was 'as harmless as Dawn dishwashing liquid' ... But BP and the EPA clearly knew about the toxicity of the Corexit long before this spill." According to Shaw, BP's own safety sheet on Corexit says that there are "high and immediate human health hazards". Cleanup workers were not provided safety equipment by the company, and the safety manuals were "rarely if ever" followed or distributed to workers, according to a Newsweek investigation. The safety manuals read: "Avoid breathing vapor" and "Wear suitable protective clothing." Oil cleanup workers reported that they were not allowed to use respirators, and that their jobs were threatened if they did.
A peer-reviewed study published in The American Journal of Medicine reported significantly altered blood profiles of individuals exposed to the spilled oil and dispersants that put them at increased risk of developing liver cancer, leukemia and other disorders.
BP disputed the study's methodology and said other studies supported its position that dispersants did not create a danger to health. In 2014, a study published in Proceedings of the National Academy of Sciences found heart deformities in fish exposed to oil from the spill; the researchers said that their results probably apply to humans as well as fish.
Civil and criminal suits
On 15 December 2010, the Department of Justice filed a civil and criminal suit against BP and other defendants for violations under the Clean Water Act in the U.S. District Court for the Eastern District of Louisiana. The case was consolidated with about 200 others, including those brought by state governments, individuals, and companies, under Multi-District Litigation docket MDL No. 2179, before U.S. District Judge Carl Barbier.
In November 2012, BP and the Department of Justice reached a $4 billion settlement of all federal criminal charges related to the explosion and spill. Under the settlement, BP agreed to plead guilty to 11 felony counts of manslaughter, two misdemeanors, and a felony count of lying to Congress, and agreed to four years of government monitoring of its safety practices and ethics. BP also paid $525 million to settle civil charges by the Securities and Exchange Commission that it misled investors about the flow rate of oil from the well. At the same time, the US government filed criminal charges against three BP employees: two site managers were charged with manslaughter and negligence, and one former vice president with obstruction.
Judge Barbier ruled in the first phase of the case that BP had committed gross negligence and that "its employees took risks that led to the largest environmental disaster in U.S. history." He apportioned fault at 67% for BP, 30% for Transocean and 3% for Halliburton. Barbier ruled that BP was "reckless" and had acted with "conscious disregard of known risks."
Claims settlement
In June 2010, after a meeting in the White House between President Barack Obama and BP executives, the president announced that BP would pay $20 billion into a trust fund to be used to compensate victims of the oil spill. BP also set aside $100 million to compensate oil workers who lost their jobs because of the spill.
On 2 March 2012, BP and businesses and residents affected by the spill reached a settlement of roughly 100,000 suits claiming economic losses. BP estimated that the settlement cost more than $9.2 billion. In 2015, BP and five states agreed to an $18.5 billion settlement to be used for Clean Water Act penalties and various claims.
2022 Ohio refinery fire
On 20 September 2022, a fire at BP's Husky Toledo refinery caused the death of two workers. The fire was put out that day, but the refinery remained shut down, and its shutdown was expected to increase American petrol prices.
Political influence
Lobbying for Libyan prisoner transfer release
BP lobbied the British government to conclude a prisoner-transfer agreement which the Libyan government had wanted in order to secure the release of Abdelbaset al-Megrahi, the only person convicted of the 1988 Lockerbie bombing over Scotland, which killed 270 people. BP stated that it had pressed for the conclusion of the prisoner-transfer agreement amid fears that delays would damage its "commercial interests" and disrupt its £900 million offshore drilling operations in the region, but said that it had not been involved in negotiations concerning the release of Megrahi.
Political contributions and lobbying
In February 2002, BP's then-chief executive, Lord Browne of Madingley, renounced the practice of corporate campaign contributions, saying: "That's why we've decided, as a global policy, that from now on we will make no political contributions from corporate funds anywhere in the world." When the Washington Post reported in June 2010 that BP North America "donated at least $4.8 million in corporate contributions in the past seven years to political groups, partisan organizations and campaigns engaged in federal and state elections", mostly to oppose ballot measures in two states aiming to raise taxes on the oil industry, the company said that the commitment had applied only to contributions to individual candidates.
During the 2008 U.S. election cycle, BP employees contributed to various candidates, with Barack Obama receiving the largest amount of money, broadly in line with contributions from Shell and Chevron but significantly less than those of Exxon Mobil. In 2009, BP spent nearly $16 million lobbying the U.S. Congress. In 2011, BP spent a total of $8,430,000 on lobbying and had 47 registered lobbyists.
Oman 1954 War
In 1937, the Iraq Petroleum Company (IPC), 23.75% owned by BP, signed an oil concession agreement with the Sultan of Muscat. In 1952, IPC offered financial support to raise an armed force that would assist the Sultan in occupying the interior region of Oman, an area that geologists believed to be rich in oil. This led to the 1954 outbreak of the Jebel Akhdar War in Oman, which lasted for more than five years.
Market manipulation investigations and sanctions
The US Justice Department and the Commodity Futures Trading Commission (CFTC) filed charges against BP Products North America Inc. (a subsidiary of BP plc) and several BP traders, alleging they conspired to raise the price of propane by seeking to corner the propane market in 2004. In 2006, one former trader pleaded guilty. In 2007, BP paid $303 million in restitution and fines as part of an agreement to defer prosecution. BP was charged with cornering and manipulating the price of TET propane in 2003 and 2004. BP paid a $125 million civil monetary penalty to the CFTC, established a compliance and ethics program, and installed a monitor to oversee its trading activities in the commodities markets. BP also paid $53 million into a restitution fund for victims and a $100 million criminal penalty, plus $25 million into a consumer fraud fund, as well as other payments. Also in 2007, four other former traders were charged. These charges were dismissed by a US District Court in 2009 on the grounds that the transactions were exempt under the Commodity Exchange Act because they did not occur in a marketplace but were negotiated contracts among sophisticated companies. The dismissal was upheld by the Court of Appeals for the 5th Circuit in 2011.
In November 2010, US regulators FERC and the CFTC began an investigation of BP for allegedly manipulating the gas market. The investigation related to trading activity that occurred in October and November 2008. At that time, CFTC Enforcement staff provided BP with a notice of intent to recommend charges of attempted market manipulation in violation of the Commodity Exchange Act. BP denied that it engaged in "any inappropriate or unlawful activity."
In July 2011, FERC staff issued a "Notice of Alleged Violations" saying it had preliminarily determined that several BP entities fraudulently traded physical natural gas in the Houston Ship Channel and Katy markets and trading points to increase the value of their financial swing spread positions.
In May 2013, the European Commission started an investigation into allegations that the companies had reported distorted prices to the price reporting agency Platts in order to "manipulate the published prices" for several oil and biofuel products. The investigation was dropped in December 2015 due to lack of evidence.
A dataset of gasoline prices of BP, Caltex, Woolworths, Coles, and Gull from Perth, gathered in the years 2001 to 2015, was used to show by statistical analysis the tacit collusion between these retailers.
Documents from a 2016 bid to drill in the Great Australian Bight revealed claims by BP that a large-scale cleanup operation following a massive oil spill would bring a "welcome boost to local economies." In the same bid, BP also stated that a diesel spill would be "socially acceptable" due to a lack of "unresolved stakeholder concerns."
An internal email from mid-2017 was leaked in April 2018 in New Zealand. The email laid out that pricing was to be raised at certain sites in a region around Otaki in order to regain volume lost at that branch. This led to the Government asking the Commerce Commission to investigate regional prices; initial indications were that motorists were paying too much across most of the country.
plug-in hybrid
A plug-in hybrid electric vehicle (PHEV) is a hybrid electric vehicle whose battery pack can be recharged by plugging a charging cable into an external electric power source, in addition to being charged internally by its on-board internal-combustion-engine-powered generator. Most PHEVs are passenger cars, but there are also PHEV versions of sports cars, commercial vehicles, vans, utility trucks, buses, trains, motorcycles, mopeds, military vehicles and boats.
Similar to all-electric vehicles (BEVs), PHEVs displace greenhouse gas emissions from the car's tailpipe exhaust to the power station generators supplying the electricity grid. These centralized generators may use renewable energy (e.g. solar, wind or hydroelectric) and be largely emission-free, or have an overall lower emission intensity than individual internal combustion engines. Compared to conventional hybrid electric vehicles (HEVs), PHEVs have a larger battery pack that can be charged from the power grid, which is more efficient and can cost less than using only the on-board generator, and they often have a more powerful electric output capable of longer and more frequent EV-mode driving, helping to reduce operating costs. A PHEV's battery pack is smaller than an all-electric vehicle's for the same vehicle weight (because it must still accommodate the combustion engine and hybrid drivetrain), but the PHEV has the auxiliary option of switching back to its gasoline or diesel engine like a conventional HEV if the battery runs low, alleviating range anxiety, especially in places that lack sufficient charging infrastructure.
Mass-produced PHEVs have been available to the public in China and the United States since 2010, with the introduction of the Chevrolet Volt, which was the best-selling PHEV until the end of its production in 2019. By the end of 2017, there were over 40 models of highway-legal series-production PHEVs for retail sale, available mainly in China, Japan, the United States, Canada and Western Europe. The top-selling models are the Mitsubishi Outlander P-HEV, the Chevrolet Volt family and the Toyota Prius PHV.
As of December 2019, the global stock of PHEVs totaled 2.4 million units, representing one-third of the stock of plug-in electric passenger cars on the world's roads. As of December 2019, China had the world's largest stock of PHEVs with 767,900 units, followed by the United States with 567,740 and the United Kingdom with 159,910.
Terminology
A plug-in hybrid's all-electric range is designated by PHEV-[miles] or PHEV[kilometers]km, in which the number represents the distance the vehicle can travel on battery power alone. For example, a PHEV-20 can travel 32 km (20 miles) without using its combustion engine, so it may also be designated a PHEV32km.
PHEVs are charged using two kinds of current: alternating current (AC), handled by the vehicle's on-board charger, and direct current (DC), supplied by external chargers.
Other popular terms sometimes used for plug-in hybrids are "grid-connected hybrids", "Gas-Optional Hybrid Electric Vehicle" (GO-HEV) or simply "gas-optional hybrids". GM calls its Chevrolet Volt series plug-in hybrid an "Extended-Range Electric Vehicle".
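As a rough illustration of the designation arithmetic (a sketch of my own, not an official conversion tool), the two naming schemes can be mapped onto each other with a simple mile-to-kilometre conversion:

    # Convert a PHEV-[miles] designation into its PHEV[km]km equivalent.
    KM_PER_MILE = 1.609

    def designation_km(miles: int) -> str:
        return f"PHEV{round(miles * KM_PER_MILE)}km"

    print(designation_km(20))  # -> "PHEV32km", matching the PHEV-20 example above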
History

Invention and early interest
The Lohner–Porsche Mixte Hybrid, produced as early as 1899, was the first hybrid electric car. Early hybrids could be charged from an external source before operation. However, the term "plug-in hybrid" has come to mean a hybrid vehicle that can be charged from a standard electrical wall socket. The term "plug-in hybrid electric vehicle" was coined by UC Davis professor Andrew Frank, who has been called the "father of the modern plug-in hybrid".
The July 1969 issue of Popular Science featured an article on the General Motors XP-883 plug-in hybrid. The concept commuter vehicle housed six 12-volt lead–acid batteries in the trunk area and a transverse-mounted DC electric motor driving the front wheels. The car could be plugged into a standard North American 120-volt AC outlet for recharging.
Revival of interest
In 2003, Renault began selling the Elect'road, a plug-in series hybrid version of its popular Kangoo, in Europe. In addition to its engine, it could be plugged into a standard outlet and recharged to 95% of capacity in about four hours. After selling about 500 vehicles, primarily in France, Norway and the UK, the Elect'road was redesigned in 2007.
With the availability of hybrid vehicles and rising gas prices in the United States starting around 2004, interest in plug-in hybrids increased. Some plug-in hybrids were conversions of existing hybrids; for example, the 2004 CalCars conversion of a Prius added lead–acid batteries and a range of up to 15 km (9 mi) using only electric power.
In 2006, both Toyota and General Motors announced plans for plug-in hybrids. GM's Saturn Vue project was cancelled, but the Toyota plug-in was certified for road use in Japan in 2007. In 2007, Quantum Technologies and Fisker Coachbuild, LLC announced the launch of a joint venture in Fisker Automotive. Fisker intended to build a US$80,000 luxury PHEV-50, the Fisker Karma, initially scheduled for late 2009. Also in 2007, Aptera Motors announced its Typ-1 two-seater; however, the company folded in December 2011.
In 2007, Chinese car manufacturer BYD Auto, owned by China's largest mobile phone battery maker, announced it would introduce a production PHEV-60 sedan in China in the second half of 2008. BYD exhibited it in January 2008 at the North American International Auto Show in Detroit. Based on BYD's midsize F6 sedan, it uses lithium iron phosphate (LiFePO4)-based batteries instead of conventional lithium-ion batteries, and can be recharged to 70% of capacity in 10 minutes.
In 2007, Ford delivered the first Ford Escape Plug-in Hybrid of a fleet of 20 demonstration PHEVs to Southern California Edison. As part of this demonstration program, Ford also developed the first flexible-fuel plug-in hybrid SUV, which was delivered in June 2008. This demonstration fleet of plug-ins was field-tested with utility company fleets in the U.S. and Canada, and during the first two years of the program the fleet logged more than 75,000 miles. In August 2009, Ford delivered the first Escape Plug-in equipped with intelligent vehicle-to-grid (V2G) communications and control system technology, and planned to equip all 21 plug-in hybrid Escapes with the vehicle-to-grid communications technology. Sales of the Escape PHEV were scheduled for 2012.
On January 14, 2008, Toyota announced it would start sales of lithium-ion battery PHEVs by 2010, but later in the year indicated they would be offered to commercial fleets in 2009. On March 27, the California Air Resources Board (CARB) modified its regulations, requiring automobile manufacturers to produce 58,000 plug-in hybrids during 2012 through 2014.
This requirement was an alternative, requested by automakers, to an earlier mandate to produce 25,000 pure zero-emissions vehicles, reducing that requirement to 5,000. On June 26, Volkswagen announced that it would introduce production plug-ins based on the Golf compact; Volkswagen uses the term "TwinDrive" to denote a PHEV. In September, Mazda was reported to be planning PHEVs. On September 23, Chrysler announced that it had prototyped a plug-in Jeep Wrangler and a Chrysler Town and Country minivan, both PHEV-40s with series powertrains, as well as an all-electric Dodge sports car, and said that one of the three vehicles would go into production.
On October 3, the U.S. enacted the Energy Improvement and Extension Act of 2008. The legislation provided tax credits for the purchase of plug-in electric vehicles with battery capacity over 4 kilowatt-hours. The federal tax credits were extended and modified by the American Clean Energy and Security Act of 2009, under which the battery capacity must be over 5 kWh and the credit phases out after the automaker has sold at least 200,000 vehicles in the U.S.
Series production
On December 15, 2008, BYD Auto began selling its F3DM in China, becoming the first production plug-in hybrid sold in the world, though it was initially available only to corporate and government customers. Sales to the general public began in Shenzhen in March 2010, but because the F3DM nearly doubled the price of cars that run on conventional fuel, BYD expected subsidies from the local government to make the plug-in affordable to personal buyers. Toyota tested 600 pre-production Prius Plug-ins in Europe and North America in 2009 and 2010. Volvo Cars built two demonstration versions of the Volvo V70 Plug-in Hybrid in 2009 but did not proceed with production; the V60 Plug-in Hybrid was released for sale in 2011.
In October 2010, Lotus Engineering unveiled the Lotus CityCar, a plug-in series hybrid concept car designed for flex-fuel operation on ethanol or methanol as well as regular gasoline. The lithium battery pack provides an all-electric range of 60 kilometres (37 mi), and the 1.2-litre flex-fuel engine kicks in to extend the range to more than 500 kilometres (310 mi).
GM officially launched the Chevrolet Volt in the U.S. on November 30, 2010, and retail deliveries began in December 2010. Its sibling, the Opel/Vauxhall Ampera, was launched in Europe between late 2011 and early 2012. The first deliveries of the Fisker Karma took place in July 2011, and deliveries to retail customers began in November 2011. The Toyota Prius Plug-in Hybrid was released in Japan in January 2012, followed by the United States in February 2012; deliveries of the Prius PHV in Europe began in late June 2012. The Ford C-Max Energi was released in the U.S. in October 2012, and the Volvo V60 Plug-in Hybrid in Sweden by late 2012.
The Honda Accord Plug-in Hybrid was released in selected U.S. markets in January 2013, and the Mitsubishi Outlander P-HEV in Japan in January 2013, becoming the first SUV plug-in hybrid on the market. Deliveries of the Ford Fusion Energi began in February 2013. BYD Auto stopped production of its BYD F3DM due to low sales, and its successor, the BYD Qin, began sales in Costa Rica in November 2013, with sales in other Latin American countries scheduled to begin in 2014. Qin deliveries began in China in mid-December 2013.
Deliveries to retail customers of the limited-edition McLaren P1 supercar began in the UK in October 2013, and the Porsche Panamera S E-Hybrid began deliveries in the U.S. in November 2013. The first retail deliveries of the Cadillac ELR took place in the U.S. in December 2013. The BMW i8 and the limited-edition Volkswagen XL1 were released to retail customers in Germany in June 2014. The Porsche 918 Spyder was also released in Europe and the U.S. in 2014. The first units of the Audi A3 Sportback e-tron and Volkswagen Golf GTE were registered in Germany in August 2014.
In December 2014, BMW announced the group was planning to offer plug-in hybrid versions of all its core-brand models using eDrive technology developed for its BMW i brand plug-in vehicles (the BMW i3 and BMW i8). The goal of the company is to use plug-in technology to continue offering high-performance vehicles while reducing CO2 emissions below 100 g/km. At the time of the announcement, the carmaker was already testing a BMW 3 Series plug-in hybrid prototype. The first model available for retail sale was to be the 2016 BMW X5 eDrive, with the production version unveiled at the 2015 Shanghai Motor Show. The second-generation Chevrolet Volt was unveiled at the January 2015 North American International Auto Show, and retail deliveries began in the U.S. and Canada in October 2015.
In March 2015, Audi said it planned to make a plug-in hybrid version of every model series, and that it expected plug-in hybrids, together with natural gas vehicles and battery-electric drive systems, to make a key contribution to achieving the company's CO2 targets. The Audi Q7 e-tron was to follow the A3 e-tron already in the market. Also in March 2015, Mercedes-Benz announced that the company's main emphasis regarding alternative drives in the following years would be on plug-in hybrids. The carmaker planned to introduce 10 new plug-in hybrid models by 2017, and its next release was the Mercedes-Benz C 350 e, Mercedes' second plug-in hybrid after the S 500 Plug-In Hybrid. Other plug-in hybrids released in 2015 were the BYD Tang, Volkswagen Passat GTE, Volvo XC90 T8, and Hyundai Sonata PHEV. Global combined Volt/Ampera family sales passed the 100,000-unit milestone in October 2015. By the end of 2015, over 517,000 highway-legal plug-in hybrid electric cars had been sold worldwide since December 2008, out of total global sales of more than 1.25 million light-duty plug-in electric cars.
In February 2016, BMW announced the introduction of the "iPerformance" model designation, to be given to all BMW plug-in hybrid vehicles from July 2016. The aim was to provide a visible indicator of the transfer of technology from BMW i to the BMW core brand. The new designation was used first on the plug-in hybrid variants of the new BMW 7 Series, the BMW 740e iPerformance, and of the 3 Series, the BMW 330e iPerformance.
Hyundai Motor Company made the official debut of its three-model Hyundai Ioniq line-up at the 2016 Geneva Motor Show. The Ioniq family of electric-drive vehicles includes the Ioniq Plug-in, which was expected to achieve a fuel economy of 125 mpg‑e (27 kW⋅h/100 mi; 16.8 kW⋅h/100 km) in all-electric mode. The Ioniq Plug-in was scheduled to be released in the U.S. in the fourth quarter of 2017.
The second-generation Prius plug-in hybrid, called Prius Prime in the U.S. and Prius PHV in Japan, was unveiled at the 2016 New York International Auto Show. Retail deliveries of the Prius Prime began in the U.S.
in November 2016, and it was scheduled for release in Japan by the end of 2016. The Prime has an EPA-rated all-electric range of 25 mi (40 km), over twice the range of the first-generation model, and an EPA-rated fuel economy of 133 mpg‑e (25.3 kW⋅h/100 mi) in all-electric mode (EV mode), the highest MPGe rating in EV mode of any vehicle rated by the EPA. Unlike its predecessor, the Prime runs entirely on electricity in EV mode. Global sales of the Mitsubishi Outlander P-HEV passed the 100,000-unit milestone in March 2016. BYD Qin sales in China reached the 50,000-unit milestone in April 2016, making it the fourth plug-in hybrid to pass that mark.
In June 2016, Nissan announced it would introduce a compact range-extender car in Japan before March 2017. The series plug-in hybrid would use a new hybrid system, dubbed e-Power, which debuted with the Nissan Gripz concept crossover showcased at the 2015 Frankfurt Auto Show. In January 2016, Chrysler debuted its plug-in hybrid minivan, the Chrysler Pacifica Hybrid, with an EPA-rated electric-only range of 48 km (30 miles); it was the first hybrid minivan of any type, and was first sold in the United States, Canada, and Mexico in 2017. In December 2017, Honda began retail deliveries of the Honda Clarity Plug-In Hybrid in the United States and Canada, with an EPA-rated electric-only range of 76 km (47 miles).
In 2013, Volkswagen started production of the Volkswagen XL1, a two-person, limited-production, diesel-powered plug-in hybrid vehicle designed to travel 100 km/L (280 mpg‑imp; 235 mpg‑US) on diesel while still being both roadworthy and practical. The model is built with an 800 cc (49 cu in) TDI twin-cylinder, common-rail, 35 kW (47 hp) turbo-diesel and a 20 kW (27 hp) electric motor. The XL1 is notable as one of the only mass-produced plug-in diesel hybrids, and one of the few mass-produced diesel hybrids of any kind.
Technology

Powertrains
PHEVs are based on the same three basic powertrain architectures as conventional hybrids: a series hybrid is propelled by electric motors only, a parallel hybrid is propelled both by its internal combustion engine and by electric motors operating concurrently, and a series-parallel hybrid can operate in either mode. While a plain hybrid vehicle charges its battery from its engine only, a plug-in hybrid can obtain a significant amount of the energy required to recharge its battery from external sources.
Charging systems
The battery charger can be on-board or external to the vehicle. An on-board charger converts AC power from the outlet into the DC power used to charge the battery. On-board chargers are limited in capacity by their weight and size, and by the limited capacity of general-purpose AC outlets. Dedicated off-board chargers can be as large and powerful as the user can afford, but require returning to the charger; high-speed chargers may be shared by multiple vehicles. Using the electric motor's inverter allows the motor windings to act as the transformer coils and the existing high-power inverter as the AC-to-DC charger. As these components are already required on the car, and are designed to handle any practical power capability, they can be used to create a very powerful form of on-board charger with no significant additional weight or size. AC Propulsion uses this charging method, referred to as "reductive charging".
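To make the capacity trade-off concrete, here is a back-of-the-envelope charge-time estimate (my own sketch; the 3.3 kW on-board charger rating and 90% efficiency are assumed figures, not from this article):

    # Rough charge-time estimate: hours = energy needed / usable charger power.
    def charge_time_hours(energy_kwh: float, charger_kw: float,
                          efficiency: float = 0.9) -> float:
        # Ignores charge taper near full and treats efficiency as constant.
        return energy_kwh / (charger_kw * efficiency)

    # Replenishing 8 kWh through an assumed 3.3 kW on-board AC charger:
    print(round(charge_time_hours(8.0, 3.3), 1))  # ~2.7 hours

This is why a powerful off-board or inverter-based ("reductive") charger shortens charging dramatically: the same 8 kWh delivered at, say, 50 kW would take roughly ten minutes.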
Modes of operation
A plug-in hybrid operates in charge-depleting and charge-sustaining modes; combinations of these two modes are termed blended mode or mixed mode. These vehicles can be designed to drive for an extended range in all-electric mode, either at low speeds only or at all speeds. These modes manage the vehicle's battery discharge strategy, and their use has a direct effect on the size and type of battery required.
Charge-depleting mode allows a fully charged PHEV to operate exclusively (or, depending on the vehicle, almost exclusively, except during hard acceleration) on electric power until its battery state of charge is depleted to a predetermined level, at which time the vehicle's internal combustion engine or fuel cell is engaged. This period is the vehicle's all-electric range. This is the only mode in which a battery electric vehicle can operate, hence their limited range.
Mixed mode describes a trip using a combination of multiple modes. For example, a car may begin a trip in low-speed charge-depleting mode, then enter a freeway and operate in blended mode. The driver might exit the freeway and drive without the internal combustion engine until the all-electric range is exhausted. The vehicle can then revert to charge-sustaining mode until the final destination is reached. This contrasts with a charge-depleting trip, which would be driven within the limits of a PHEV's all-electric range.
Most PHEVs also have two additional charge-sustaining modes:
Battery Hold: the electric motor is locked out and the vehicle operates exclusively on combustion power, so that whatever charge is left in the battery remains available for when mixed-mode or full-electric operation is re-engaged, while regenerative braking is still available to boost the battery charge. On some PHEVs, vehicle services which use the traction battery (such as heating and air conditioning) are placed in a low-power-consumption mode to further conserve the remaining battery charge. The lock-out of the electric motor is automatically overridden (charge permitting) should full acceleration be required.
Self Charge: the electric motor's armature is engaged to the transmission but connected to the battery so that it runs as a generator and recharges the battery while the car is in motion, although this comes at the expense of higher fuel consumption, as the combustion engine has to both power the vehicle and charge the battery. This is useful for "charging on the move" when there are limited places to plug the vehicle in.
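The discharge strategy above can be pictured as a simple state machine keyed to battery state of charge (SOC). The sketch below is my own simplified illustration, not any manufacturer's control logic; the 30% floor echoes the Volt sizing example discussed later in this article:

    # Simplified PHEV mode selection keyed to state of charge (SOC).
    CD_FLOOR = 0.30  # assumed SOC at which the engine takes over

    def select_mode(soc: float, hold_requested: bool = False) -> str:
        if hold_requested:
            return "charge-sustaining (battery hold)"
        if soc > CD_FLOOR:
            return "charge-depleting (all-electric or blended)"
        return "charge-sustaining (engine maintains SOC)"

    for soc in (0.95, 0.45, 0.25):
        print(f"SOC {soc:.0%}: {select_mode(soc)}")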
Electric power storage
The optimum battery size varies depending on whether the aim is to reduce fuel consumption, running costs, or emissions, but a 2009 study concluded that "The best choice of PHEV battery capacity depends critically on the distance that the vehicle will be driven between charges. Our results suggest that for urban driving conditions and frequent charges every 10 miles or less, a low-capacity PHEV sized with an AER (all-electric range) of about 7 miles would be a robust choice for minimizing gasoline consumption, cost, and greenhouse gas emissions. For less frequent charging, every 20–100 miles, PHEVs release fewer GHGs, but HEVs are more cost effective."
PHEVs typically require deeper battery charging and discharging cycles than conventional hybrids. Because the number of full cycles influences battery life, battery life may be shorter than in traditional HEVs, which do not deplete their batteries as fully. However, some authors argue that PHEVs will soon become standard in the automobile industry. Design issues and trade-offs against battery life, capacity, heat dissipation, weight, costs, and safety need to be solved. Advanced battery technology is under development, promising greater energy densities by both mass and volume, and battery life expectancy is expected to increase.
The cathodes of some early 2007 lithium-ion batteries were made from lithium cobalt oxide. This material is expensive, and cells made with it can release oxygen if overcharged; if the cobalt is replaced with iron phosphate, the cells will not burn or release oxygen under any charge. At early-2007 gasoline and electricity prices, the break-even point was reached after six to ten years of operation. The payback period may be longer for plug-in hybrids because of their larger, more expensive batteries.
Nickel–metal hydride and lithium-ion batteries can be recycled; Toyota, for example, has a recycling program in place under which dealers are paid a US$200 credit for each battery returned. However, plug-in hybrids typically use larger battery packs than comparable conventional hybrids, and thus require more resources. Pacific Gas and Electric Company (PG&E) has suggested that utilities could purchase used batteries for backup and load-leveling purposes, stating that while these used batteries may no longer be usable in vehicles, their residual capacity still has significant value. More recently, General Motors (GM) said it has been "approached by utilities interested in using recycled Volt batteries as a power storage system, a secondary market that could bring down the cost of the Volt and other plug-in vehicles for consumers".
Ultracapacitors (or "supercapacitors") are used in some plug-in hybrids, such as AFS Trinity's concept prototype, to store rapidly available energy with their high power density, in order to keep batteries within safe resistive heating limits and extend battery life. The CSIRO's UltraBattery combines a supercapacitor and a lead–acid battery in a single unit, creating a hybrid car battery that lasts longer, costs less and is more powerful than current technologies used in plug-in hybrid electric vehicles (PHEVs).
Conversions of production vehicles
There are several companies that convert fossil-fuel non-hybrid vehicles to plug-in hybrids. Aftermarket conversion of an existing production hybrid to a plug-in hybrid typically involves increasing the capacity of the vehicle's battery pack and adding an on-board AC-to-DC charger. Ideally, the vehicle's powertrain software would be reprogrammed to make full use of the battery pack's additional energy storage capacity and power output. Many early plug-in hybrid electric vehicle conversions have been based on the Toyota Prius. Some of the systems have involved replacement of the vehicle's original NiMH battery pack and its electronic control unit; others add an additional battery pack onto the original battery pack.
Target market
In recent years, demand for all-electric vehicles, especially in the United States market, has been driven by government incentives through subsidies, lobbying, and taxes. In particular, American sales of the Nissan Leaf have depended on generous incentives and special treatment in the state of Georgia, the top-selling Leaf market. According to international market research, 60% of respondents believe a battery driving range of less than 160 km (99 mi) is unacceptable, even though only 2% drive more than that distance per day.
Among popular current all-electric vehicles, only the Tesla Model S (with its most expensive version offering a 265-mile (426 km) range in the U.S. Environmental Protection Agency 5-cycle test) significantly exceeds this threshold. In 2021, for the 2022 model year, the Nissan Leaf had an EPA-rated range of 212 miles (341 km) for the 60 kWh model. Plug-in hybrids provide the extended range and refueling convenience of conventional hybrids while enabling drivers to use battery-electric power for at least a significant part of their typical daily driving. The average trip to or from work in the United States in 2009 was 11.8 miles (19.0 km), while the average distance commuted to work in England and Wales in 2011 was slightly lower, at 9.3 miles (15 km). Since building a PHEV with a longer all-electric range adds weight and cost and reduces cargo and/or passenger space, there is no single all-electric range that is optimal. The accompanying graph shows the observed all-electric range, in miles, for four popular U.S.-market plug-in hybrids, as tested by Popular Mechanics magazine.
A key design parameter of the Chevrolet Volt was a target of 40 miles (64 km) for the all-electric range, selected to keep the battery small and costs low, and mainly because research showed that 78% of daily commuters in the U.S. travel 40 mi (64 km) or less. This target range would allow most travel to be electrically driven, on the assumption that charging would take place at home overnight. The requirement translated into a lithium-ion battery pack with an energy storage capacity of 16 kWh, considering that the battery would be used only until its state of charge (SOC) reached 30%.
In October 2014, General Motors reported, based on data collected through its OnStar telematics system since Volt deliveries began, and with over 1 billion miles (1.6 billion km) traveled, that Volt owners drove about 62.5% of their trips in all-electric mode. In May 2016, Ford reported, based on data collected from more than 610 million miles (976 million km) logged by its electrified vehicles through its telematics system, that drivers of these vehicles run an average of 13,500 mi (21,700 km) annually, with about half of those miles driven in all-electric mode. A breakdown of these figures shows an average daily commute of 42 mi (68 km) for Ford Energi plug-in hybrid drivers. Ford noted that with the enhanced electric range of the 2017 model year, the average Fusion Energi commuter could go the entire day using no gasoline if the car is fully charged both before leaving for work and before leaving for home. According to Ford data, most customers currently appear to charge their vehicles only at home.
The 2015 edition of the EPA's annual report "Light-Duty Automotive Technology, Carbon Dioxide Emissions, and Fuel Economy Trends" estimates the following utility factors for 2015 model year plug-in hybrids, representing the percentage of miles that will be driven using electricity by an average driver, whether in electric-only or blended mode: 83% for the BMW i3 REx, 66% for the Chevrolet Volt, 45% for the Ford Energi models, 43% for the McLaren P1, 37% for the BMW i8, and 29% for the Toyota Prius PHV.
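These utility factors make the fuel impact easy to estimate. As a rough worked example (my own sketch; the 12,000-mile annual distance is an assumption, while the 0.66 utility factor above and the Volt's 37 mpg gasoline-mode rating quoted in the next section are figures from this article):

    # Annual gasoline use: only the non-electric share of miles burns fuel.
    def annual_gasoline_gallons(miles_per_year: float, utility_factor: float,
                                mpg_gasoline_mode: float) -> float:
        gasoline_miles = miles_per_year * (1.0 - utility_factor)
        return gasoline_miles / mpg_gasoline_mode

    # Chevrolet Volt: utility factor 0.66, 37 mpg in gasoline-only mode.
    print(round(annual_gasoline_gallons(12_000, 0.66, 37.0)))  # ~110 gallons/year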
A 2014 analysis conducted by the Idaho National Laboratory, using a sample of 21,600 all-electric cars and plug-in hybrids, found that Volt owners traveled on average 9,112 miles in all-electric mode (e-miles) per year, while Leaf owners traveled 9,697 e-miles per year, despite the Volt's all-electric range being about half of the Leaf's. Between January and August 2014, a period during which US sales of conventional hybrids slowed, US sales of plug-in hybrids grew from 28,241 to 40,748 compared to the same period in 2013. US sales of all-electric vehicles also grew during the same period: from 29,917 vehicles in the January to August 2013 period to 40,349 in January to August 2014.

Comparison to non-plug-in hybrids
Fuel efficiency and petroleum displacement
Plug-in hybrids have the potential to be even more efficient than conventional hybrids because a more limited use of the PHEV's internal combustion engine may allow the engine to be used at closer to its maximum efficiency. While a Toyota Prius is likely to convert fuel to motive energy on average at about 30% efficiency (well below the engine's 38% peak efficiency), the engine of a PHEV-70 would be likely to operate far more often near its peak efficiency, because the batteries can serve the modest power needs at times when the combustion engine would otherwise be forced to run well below its peak efficiency. The actual efficiency achieved depends on losses from electricity generation, inversion, battery charging/discharging, the motor controller and motor itself, the way a vehicle is used (its duty cycle), and the opportunities to recharge by connecting to the electrical grid. Each kilowatt hour of battery capacity in use will displace up to 50 U.S. gallons (190 L; 42 imp gal) of petroleum fuels per year (gasoline or diesel). Also, electricity is multi-sourced and, as a result, it gives the greatest degree of energy resilience.

The actual fuel economy for PHEVs depends on their powertrain's operating modes, the all-electric range, and the amount of driving between charges. If no gasoline is used, the miles per gallon gasoline equivalent (MPG-e) depends only on the efficiency of the electric system. The first mass-production PHEV available in the U.S. market, the 2011 Chevrolet Volt, with an EPA-rated all-electric range of 35 mi (56 km) and an additional gasoline-only extended range of 344 mi (554 km), has an EPA combined city/highway fuel economy of 93 MPG-e in all-electric mode and 37 mpg‑US (6.4 L/100 km; 44 mpg‑imp) in gasoline-only mode, for an overall combined gas-electric fuel economy rating of 60 mpg‑US (3.9 L/100 km; 72 mpg‑imp) equivalent (MPG-e). The EPA also included in the Volt's fuel economy label a table showing fuel economy and electricity consumed for five different scenarios: 30, 45, 60 and 75 mi (121 km) driven between full charges, and a never-charge scenario. According to this table, the fuel economy rises to 168 mpg‑US (1.40 L/100 km; 202 mpg‑imp) equivalent (MPG-e) with 45 mi (72 km) driven between full charges.

For the more comprehensive fuel economy and environment label that became mandatory in the U.S. beginning with model year 2013, the National Highway Traffic Safety Administration (NHTSA) and Environmental Protection Agency (EPA) issued two separate fuel economy labels for plug-in hybrids because of their design complexity, as PHEVs can operate in two or three operating modes: all-electric, blended, and gasoline-only.
One label is for series hybrids or extended-range electric vehicles (like the Chevy Volt), with all-electric and gasoline-only modes; a second label is for blended-mode or series-parallel hybrids, covering a combination of gasoline and plug-in electric operation as well as gasoline-only operation, like a conventional hybrid vehicle.

The Society of Automotive Engineers (SAE) developed its recommended practice in 1999 for testing and reporting the fuel economy of hybrid vehicles, and included language to address PHEVs. An SAE committee is working to review procedures for testing and reporting the fuel economy of PHEVs. The Toronto Atmospheric Fund tested ten retrofitted plug-in hybrid vehicles that achieved an average of 5.8 litres per 100 kilometres, or 40.6 miles per gallon, over six months in 2008, which was considered below the technology's potential.

In real-world testing using normal drivers, some Prius PHEV conversions may not achieve much better fuel economy than HEVs. For example, a plug-in Prius fleet, each with a 30 miles (48 km) all-electric range, averaged only 51 mpg‑US (4.6 L/100 km; 61 mpg‑imp) in a 17,000-mile (27,000 km) test in Seattle, and similar results were obtained with the same kind of conversion battery models at Google's RechargeIT initiative. Moreover, the additional battery pack costs US$10,000–US$11,000.

Operating costs
A study published in 2014 by researchers from Lamar University, Iowa State University and Oak Ridge National Laboratory compared the operating costs of PHEVs of various electric ranges (10, 20, 30, and 40 miles) with conventional gasoline vehicles and non-plug-in hybrid-electric vehicles (HEVs) for different payback periods, considering different charging infrastructure deployment levels and gasoline prices. The study concluded that PHEVs save around 60% or 40% in energy costs compared with conventional gasoline vehicles and HEVs, respectively. However, for drivers with significant daily vehicle miles traveled (DVMT), hybrid vehicles may be an even better choice than plug-in hybrids with a range of 40 mi (64 km), particularly where public charging infrastructure is lacking. The incremental battery cost of large-battery plug-in hybrids is difficult to justify based on the incremental savings in operating costs unless a subsidy is offered for large-battery PHEVs. When the price of gasoline increases from US$4 per gallon to US$5 per gallon, the number of drivers who benefit from a larger battery increases significantly. If the gas price is US$3, a plug-in hybrid with a range of 10 mi (16 km) is the least costly option, even if the battery cost is $200/kWh. Although quick chargers can reduce charging time, they contribute little to energy cost savings for PHEVs, as opposed to Level 2 chargers.

Cost of batteries
Disadvantages of PHEVs include the additional cost, weight and size of a larger battery pack. According to a 2010 study by the National Research Council, the cost of a lithium-ion battery pack was about US$1,700/kWh of usable energy, and considering that a PHEV-10 requires about 2.0 kWh and a PHEV-40 about 8 kWh, the estimated manufacturer cost of the battery pack for a PHEV-10 is around US$3,000, rising to US$14,000 for a PHEV-40. According to the same study, even though costs are expected to decline by 35% by 2020, market penetration is expected to be slow, and therefore PHEVs are not expected to significantly impact oil consumption or carbon emissions before 2030, unless a fundamental breakthrough in battery technologies occurs.
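The NRC pack-cost estimates above reduce to simple multiplication, and they feed directly into the break-even question taken up next. The sketch below reproduces that arithmetic and adds a crude, undiscounted payback estimate; the fuel and electricity prices and the annual electric mileage are hypothetical placeholders, not values from the study.

```python
# Battery-pack cost and crude payback arithmetic, in the spirit of the
# 2010 NRC estimates quoted above (US$1,700 per usable kWh; the study
# rounds the resulting pack costs to about $3,000 and $14,000).
# Prices and annual e-miles below are illustrative assumptions only.

BATTERY_USD_PER_KWH = 1700.0  # 2010 NRC estimate, usable-energy basis

def pack_cost(usable_kwh: float) -> float:
    return usable_kwh * BATTERY_USD_PER_KWH

def simple_payback_years(pack_usd: float, electric_miles_per_year: float,
                         gasoline_mpg: float, gasoline_usd_per_gal: float,
                         kwh_per_mile: float, elec_usd_per_kwh: float) -> float:
    """Years for undiscounted fuel savings to repay the pack cost."""
    gasoline_cost = electric_miles_per_year / gasoline_mpg * gasoline_usd_per_gal
    electricity_cost = electric_miles_per_year * kwh_per_mile * elec_usd_per_kwh
    return pack_usd / (gasoline_cost - electricity_cost)

if __name__ == "__main__":
    for name, kwh, e_miles in [("PHEV-10", 2.0, 3_000), ("PHEV-40", 8.0, 9_000)]:
        cost = pack_cost(kwh)
        years = simple_payback_years(cost, e_miles,
                                     gasoline_mpg=30.0,         # assumed
                                     gasoline_usd_per_gal=4.0,  # assumed
                                     kwh_per_mile=0.36,         # assumed
                                     elec_usd_per_kwh=0.12)     # assumed
        print(f"{name}: pack ≈ ${cost:,.0f}, simple payback ≈ {years:.0f} years")
```

Even with these generous assumptions, the undiscounted payback runs well past a decade, which is the intuition behind the break-even conclusion discussed next.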
According to the 2010 NRC study, although a mile driven on electricity is cheaper than one driven on gasoline, lifetime fuel savings are not enough to offset plug-ins' high upfront costs, and it will take decades before the break-even point is achieved. Furthermore, hundreds of billions of dollars in government subsidies and incentives are likely to be required to achieve rapid plug-in market penetration in the U.S. A 2013 study by the American Council for an Energy-Efficient Economy reported that battery costs came down from US$1,300 per kilowatt hour in 2007 to US$500 per kilowatt hour in 2012. The U.S. Department of Energy has set cost targets for its sponsored battery research of US$300 per kilowatt hour in 2015 and US$125 per kilowatt hour by 2022. Cost reductions through advances in battery technology and higher production volumes will allow plug-in electric vehicles to be more competitive with conventional internal combustion engine vehicles.

A study published in 2011 by the Belfer Center, Harvard University, found that the gasoline cost savings of PHEVs over the vehicles' lifetimes do not offset their higher purchase prices. This finding was estimated by comparing their lifetime net present value at 2010 purchase and operating costs for the U.S. market, and assuming no government subsidies. According to the study's estimates, a PHEV-40 is US$5,377 more expensive than a conventional internal combustion engine vehicle, while a battery electric vehicle (BEV) is US$4,819 more expensive. The study also examined how this balance will change over the next 10 to 20 years, assuming that battery costs will decrease while gasoline prices increase. Under the future scenarios considered, the study found that BEVs will be significantly less expensive than conventional cars (US$1,155 to US$7,181 cheaper), while PHEVs will be more expensive than BEVs in almost all comparison scenarios, and less expensive than conventional cars only in a scenario with very low battery costs and high gasoline prices. BEVs are simpler to build and do not use liquid fuel, while PHEVs have more complicated powertrains and still have gasoline-powered engines.

Emissions shifted to electric plants
Increased pollution is expected to occur in some areas with the adoption of PHEVs, but most areas will experience a decrease. A study by the ACEEE predicts that widespread PHEV use in heavily coal-dependent areas would result in an increase in local net sulfur dioxide and mercury emissions, given emissions levels from most coal plants currently supplying power to the grid. Although clean coal technologies could create power plants which supply grid power from coal without emitting significant amounts of such pollutants, the higher cost of applying these technologies may increase the price of coal-generated electricity. The net effect on pollution depends on the fuel source of the electrical grid (fossil or renewable, for example) and the pollution profile of the power plants themselves. Identifying, regulating and upgrading single-point pollution sources such as a power plant, or replacing a plant altogether, may also be more practical. From a human health perspective, shifting pollution away from large urban areas may be considered a significant advantage.

According to a 2009 study by the National Academy of Sciences, "Electric vehicles and grid-dependent (plug-in) hybrid vehicles showed somewhat higher nonclimate damages than many other technologies."
Efficiency of plug-in hybrids is also affected by the overall efficiency of electric power transmission. Transmission and distribution losses in the USA were estimated at 7.2% in 1995 and 6.5% in 2007. By life cycle analysis of air pollution emissions, natural gas vehicles are currently the lowest emitter.

Tiered rate structure for electric bills
The additional electrical consumption to recharge the plug-in vehicles could push many households in areas that do not have off-peak tariffs into the higher-priced tier and negate financial benefits. Customers under such tariffs could see significant savings by being careful about when the vehicle is charged, for example, by using a timer to restrict charging to off-peak hours. Thus, an accurate comparison of the benefit requires each household to evaluate its current electrical usage tier and tariffs, weighed against the cost of gasoline and the actual observed operational cost of electric-mode vehicle operation.

Greenhouse gas emissions
The effect of PHEVs on greenhouse gas emissions is complex. Plug-in hybrid vehicles operating in all-electric mode do not emit harmful tailpipe pollutants from the onboard source of power. The clean air benefit is usually local because, depending on the source of the electricity used to recharge the batteries, air pollutant emissions are shifted to the location of the generation plants. In the same way, PHEVs do not emit greenhouse gases from the onboard source of power, but from the point of view of a well-to-wheel assessment, the extent of the benefit depends on the fuel and technology used for electricity generation. From the perspective of a full life cycle analysis, the electricity used to recharge the batteries must be generated from zero-emission sources such as renewables (e.g. wind power, solar energy or hydroelectricity) or nuclear power for PEVs to have little or no well-to-wheel emissions. On the other hand, when PEVs are recharged from coal-fired plants, they usually produce slightly more greenhouse gas emissions than internal combustion engine vehicles. In the case of plug-in hybrid electric vehicles operating in hybrid mode with assistance from the internal combustion engine, tailpipe and greenhouse gas emissions are lower in comparison to conventional cars because of their higher fuel economy.

Life cycle energy and emissions assessments
Argonne
In 2009, researchers at Argonne National Laboratory adapted their GREET model to conduct a full well-to-wheels (WTW) analysis of energy use and greenhouse gas (GHG) emissions of plug-in hybrid electric vehicles for several scenarios, considering different on-board fuels and different sources of electricity generation for recharging the vehicle batteries. Three US regions were selected for the analysis, California, New York, and Illinois, as these regions include major metropolitan areas with significant variations in their energy generation mixes. The full-cycle analysis results were also reported for the US generation mix and for renewable electricity, to examine cases of average and clean mixes, respectively. This 2009 study showed a wide spread of petroleum use and GHG emissions among the different fuel production technologies and grid generation mixes. The Argonne study found that PHEVs offered reductions in petroleum energy use as compared with regular hybrid electric vehicles.
More petroleum energy savings, and also greater GHG emissions reductions, were realized as the all-electric range increased, except when the electricity used for recharging was dominated by coal- or oil-fired power generation. As expected, electricity from renewable sources realized the largest reductions in petroleum energy use and GHG emissions for all PHEVs as the all-electric range increased. The study also concluded that plug-in vehicles that employ biomass-based fuels (biomass-E85 and -hydrogen) may not realize GHG emissions benefits over regular hybrids if power generation is dominated by fossil sources.

Oak Ridge
A 2008 study by researchers at Oak Ridge National Laboratory analyzed oil use and greenhouse gas (GHG) emissions of plug-in hybrids relative to hybrid electric vehicles under several scenarios for the years 2020 and 2030. The study considered the mix of power sources for 13 U.S. regions that would be used during recharging of vehicles, generally a combination of coal, natural gas and nuclear energy, and to a lesser extent renewable energy. A 2010 study conducted at Argonne National Laboratory reached similar findings, concluding that PHEVs will reduce oil consumption but could produce very different greenhouse gas emissions in each region depending on the energy mix used to generate the electricity that recharges the plug-in hybrids.

Environmental Protection Agency
In October 2014, the U.S. Environmental Protection Agency published the 2014 edition of its annual report Light-Duty Automotive Technology, Carbon Dioxide Emissions, and Fuel Economy Trends. For the first time, the report presents an analysis of the impact of alternative fuel vehicles, with emphasis on plug-in electric vehicles, because as their market share approached 1%, PEVs began to have a measurable impact on the U.S. overall new vehicle fuel economy and CO2 emissions.

EPA's report included the analysis of 12 all-electric passenger cars and 10 plug-in hybrids available in the market as model year 2014. For purposes of an accurate estimation of emissions, the analysis took into consideration the differences in operation between those PHEVs, like the Chevrolet Volt, that can operate in all-electric mode without using gasoline, and those that operate in a blended mode, like the Toyota Prius PHV, which uses both energy stored in the battery and energy from the gasoline tank to propel the vehicle but can deliver substantial all-electric driving in blended mode. In addition, since the all-electric range of plug-in hybrids depends on the size of the battery pack, the analysis introduced a utility factor as a projection, on average, of the percentage of miles that will be driven using electricity (in electric-only and blended modes) by an average driver. The report tabulates the overall EV/hybrid fuel economy, expressed in terms of miles per gallon gasoline equivalent (MPG-e), and the utility factor for the ten MY2014 plug-in hybrids available in the U.S. market. The study used the utility factor (since in pure EV mode there are no tailpipe emissions) and the EPA's best estimate of the CO2 tailpipe emissions produced by these vehicles in real-world city and highway operation, based on the EPA 5-cycle label methodology, using a weighted 55% city/45% highway driving split. In addition, the EPA accounted for the upstream CO2 emissions associated with the production and distribution of the electricity required to charge the PHEVs.
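The logic of that accounting can be sketched in a few lines: weight powerplant CO2 on electric miles and tailpipe CO2 on gasoline miles by the utility factor. The example below is a simplified illustration, not the EPA's label method; the vehicle parameters are assumed, Volt-like values, the regional grid factors are the EPA figures quoted in the next paragraph, and the optional grid-loss adjustment uses the roughly 6.5% transmission and distribution loss mentioned earlier.

```python
# Simplified combined (tailpipe + upstream) CO2 per mile for a PHEV:
# a utility-factor-weighted mix of powerplant emissions on electric
# miles and tailpipe emissions on gasoline miles. Vehicle parameters
# are assumed, Volt-like values, not EPA label data.

def phev_co2_g_per_mile(utility_factor: float,
                        tailpipe_g_per_mile: float,  # gasoline-mode CO2
                        kwh_per_mile: float,         # wall-socket draw
                        grid_g_per_kwh: float,       # powerplant factor
                        grid_loss: float = 0.065) -> float:
    # Generation must exceed the wall-socket draw to cover T&D losses.
    upstream = kwh_per_mile * grid_g_per_kwh / (1.0 - grid_loss)
    return (utility_factor * upstream
            + (1.0 - utility_factor) * tailpipe_g_per_mile)

if __name__ == "__main__":
    # EPA regional powerplant factors quoted below (g CO2/kWh).
    for region, factor in [("California", 346.0),
                           ("US average", 648.0),
                           ("Rockies", 986.0)]:
        g = phev_co2_g_per_mile(utility_factor=0.66,       # Volt, per EPA
                                tailpipe_g_per_mile=240.0,  # ~37 mpg, assumed
                                kwh_per_mile=0.36,          # assumed
                                grid_g_per_kwh=factor)
        print(f"{region:<10}: {g:.0f} g CO2/mile combined")
```

On these assumptions the combined figure roughly doubles between a California-like grid and a coal-heavy one, which is exactly the regional spread the EPA scenarios below are designed to capture.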
Since electricity production in the United States varies significantly from region to region, the EPA considered three scenarios/ranges, with the low end of the range corresponding to the California powerplant emissions factor, the middle of the range represented by the national average powerplant emissions factor, and the upper end of the range corresponding to the powerplant emissions factor for the Rockies. The EPA estimates that the electricity GHG emission factors for various regions of the country vary from 346 g CO2/kWh in California to 986 g CO2/kWh in the Rockies, with a national average of 648 g CO2/kWh. On this basis the report tabulates the tailpipe emissions and the combined tailpipe and upstream emissions for each of the 10 MY 2014 PHEVs available in the U.S. market.

National Bureau of Economic Research
Most emissions analyses use average emissions rates across regions instead of marginal generation at different times of the day. The former approach does not take into account the generation mix within interconnected electricity markets and shifting load profiles throughout the day. An analysis by three economists affiliated with the National Bureau of Economic Research (NBER), published in November 2014, developed a methodology to estimate marginal emissions of electricity demand that vary by location and time of day across the United States. The study used emissions and consumption data for 2007 through 2009, and used the specifications for the Chevrolet Volt (all-electric range of 35 mi (56 km)). The analysis found that marginal emission rates are more than three times as large in the Upper Midwest compared to the Western U.S., and that within regions, rates for some hours of the day are more than twice those for others. Applying the results of the marginal analysis to plug-in electric vehicles, the NBER researchers found that the emissions from charging PEVs vary by region and by hour of the day. In some regions, such as the Western U.S. and Texas, CO2 emissions per mile from driving PEVs are less than those from driving a hybrid car. However, in other regions, such as the Upper Midwest, charging during the recommended hours of midnight to 4 a.m. implies that PEVs generate more emissions per mile than the average car currently on the road. The results show a fundamental tension between electricity load management and environmental goals, as the hours when electricity is the least expensive to produce tend to be the hours with the greatest emissions. This occurs because coal-fired units, which have higher emission rates, are most commonly used to meet base-level and off-peak electricity demand, while natural gas units, which have relatively low emission rates, are often brought online to meet peak demand. This pattern of fuel shifting explains why emission rates tend to be higher at night and lower during periods of peak demand in the morning and evening.

Production and sales
Production models
Since 2008, plug-in hybrids have been commercially available from both specialty manufacturers and from mainstream producers of internal combustion engine vehicles. The BYD F3DM, released in China in December 2008, was the first production plug-in hybrid sold in the world. The Chevrolet Volt, launched in the U.S. in December 2010, was the first mass-production plug-in hybrid by a major carmaker.

Sales and main markets
There were 1.2 million plug-in hybrid cars on the world's roads at the end of 2017.
The stock of plug-in hybrids increased to 1.8 million in 2018, out of a global stock of about 5.1 million plug-in electric passenger cars. As of December 2017, the United States ranked as the world's largest plug-in hybrid car market with a stock of 360,510 units, followed by China with 276,580 vehicles, Japan with 100,860 units, the Netherlands with 98,220, and the UK with 88,660.

Global sales of plug-in hybrids grew from over 300 units in 2010 to almost 9,000 in 2011, jumped to over 60,000 in 2012, and reached almost 222,000 in 2015. As of December 2015, the United States was the world's largest plug-in hybrid car market with a stock of 193,770 units. About 279,000 light-duty plug-in hybrids were sold in 2016, raising the global stock to almost 800,000 highway-legal plug-in hybrid electric cars at the end of 2016. A total of 398,210 plug-in hybrid cars were sold in 2017, with China as the top-selling country with 111,000 units, and the global stock of plug-in hybrids passed the one million unit milestone by the end of 2017. Global sales of plug-in electric vehicles have been shifting for several years towards fully electric battery cars. The global ratio between all-electrics (BEVs) and plug-in hybrids (PHEVs) went from 56:44 in 2012, to 60:40 in 2015, to 66:34 in 2017, and rose to 69:31 in 2018.

By country
The Netherlands, Sweden, the UK, and the United States have the largest shares of plug-in hybrid sales as a percentage of total plug-in electric passenger vehicle sales. The Netherlands has the world's largest share of plug-in hybrids among its plug-in electric passenger car stock, with 86,162 plug-in hybrids registered at the end of October 2016, out of 99,945 plug-in electric cars and vans, representing 86.2% of the country's stock of light-duty plug-in electric vehicles. Sweden ranks next with 16,978 plug-in hybrid cars sold between 2011 and August 2016, representing 71.7% of total plug-in electric passenger car registrations. Plug-in hybrid registrations in the UK up to August 2016 totaled 45,130 units, representing 61.6% of total plug-in car registrations since 2011. In the United States, plug-in hybrids represent 47.2% of the 506,450 plug-in electric cars sold between 2008 and August 2016.

In November 2013 the Netherlands became the first country where a plug-in hybrid topped the monthly ranking of new car sales. During November sales were led by the Mitsubishi Outlander P-HEV with 2,736 units, capturing a market share of 6.8% of new passenger cars sold that month. Again in December 2013 the Outlander P-HEV ranked as the top-selling new car in the country with 4,976 units, representing a 12.6% market share of new car sales. These record sales allowed the Netherlands to become the second country, after Norway, where plug-in electric cars have topped the monthly ranking of new car sales. As of December 2013, the Netherlands was also the country with the highest plug-in hybrid market concentration, with 1.45 vehicles registered per 1,000 people.

By model
According to JATO Dynamics, since December 2018 the Mitsubishi Outlander P-HEV has been the world's all-time best-selling plug-in hybrid, with 290,000 units sold worldwide through September 2021. Europe is the Outlander P-HEV's leading market with 126,617 units sold through January 2019, followed by Japan with 42,451 units through March 2018.
European sales are led by the UK with 50,000 units by April 2020, followed by the Netherlands with 25,489 units and Norway with 14,196, both through March 2018.

Combined global sales of the Chevrolet Volt and its variants totaled about 186,000 units by the end of 2018, including about 10,000 Opel/Vauxhall Amperas sold in Europe through June 2016, and over 4,300 Buick Velite 5s (a rebadged second-generation Volt) sold only in China through December 2018. Volt sales are led by the United States with 152,144 units delivered through December 2018, followed by Canada with 17,311 units through November 2018. Until September 2018, the Chevrolet Volt was the world's top-selling plug-in hybrid.

Ranking third is the Toyota Prius Plug-in Hybrid (Toyota Prius Prime), with about 174,600 units of both generations sold worldwide through December 2018. The United States is the leading market with over 93,000 units delivered through December 2018. Japan ranks next with about 61,200 units through December 2018, followed by Europe with almost 14,800 units through June 2018. Several other plug-in hybrid models have also reached cumulative global sales of around or more than 100,000 units between the introduction of the first modern production plug-in hybrid car, the BYD F3DM, in 2008 and December 2020.

Government support and public deployment
Subsidies and economic incentives
Several countries have established grants and tax credits for the purchase of new plug-in electric vehicles (PEVs), including plug-in hybrid electric vehicles, and usually the economic incentive depends on battery size. The U.S. offers a federal income tax credit of up to US$7,500, and several states have additional incentives. The UK offers a Plug-in Car Grant up to a maximum of £5,000 (US$7,600). As of April 2011, 15 of the 27 European Union member states provided tax incentives for electrically chargeable vehicles, including all Western European countries plus the Czech Republic and Romania. Also, 17 countries levy carbon dioxide-related taxes on passenger cars as a disincentive. The incentives consist of tax reductions and exemptions, as well as bonus payments for buyers of all-electric and plug-in hybrid vehicles, hybrid vehicles, and some alternative fuel vehicles.

Other government support
United States
Incentives for the development of PHEVs are included in the Energy Independence and Security Act of 2007. The Energy Improvement and Extension Act of 2008, signed into law on October 3, 2008, grants tax credits for the purchase of PHEVs. President Barack Obama's New Energy for America plan called for deployment of 1 million plug-in hybrid vehicles by 2015, and on March 19, 2009, he announced programs directing $2.4 billion to electric vehicle development.

The American Recovery and Reinvestment Act of 2009 modified the tax credits, including a new credit for plug-in electric drive conversion kits and for two- or three-wheel vehicles. The Act ultimately directed over $6 billion to PHEVs. In March 2009, as part of the American Recovery and Reinvestment Act, the US Department of Energy announced the release of two competitive solicitations for up to $2 billion in federal funding for competitively awarded, cost-shared agreements for manufacturing of advanced batteries and related drive components, as well as up to $400 million for transportation electrification demonstration and deployment projects.
This announcement was also intended to help meet President Barack Obama's goal of putting one million plug-in hybrid vehicles on the road by 2015. Public deployments also include:

USDOE's FreedomCAR.
The US Department of Energy announced it would dole out $30 million in funding to three companies over three years to further the development of plug-in hybrids.
USDOE announced the selection of Navistar Corporation for a cost-shared award of up to $10 million to develop, test, and deploy plug-in hybrid electric (PHEV) school buses.
DOE and Sweden have a MOU to advance market integration of plug-in hybrid vehicles.
The PHEV Research Center.
San Francisco Mayor Gavin Newsom, San Jose Mayor Chuck Reed and Oakland, California Mayor Ron Dellums announced a nine-step policy plan for transforming the Bay Area into the "Electric Vehicle (EV) Capital of the U.S." and of the world.
Partnerships with Coulomb, Better Place and others are also advancing; the first charging stations went up in San Jose (more information in Plug-in hybrids in California).
The Washington state PHEV Pilot Project.
Texas Governor Rick Perry's proposal for a state $5,000 tax credit for PHEVs in "non-attainment" communities.
Seattle, including the city's converted public fleet vehicles, the Port of Seattle, King County and the Puget Sound Clean Air Agency.

European Union
Electrification of transport (electromobility) is a priority in the European Union Research Programme. It also figures prominently in the European Economic Recovery Plan presented in November 2008, in the frame of the Green Car Initiative. DG TREN will support a large European "electromobility" project on electric vehicles and related infrastructure, with a total budget of around €50 million, as part of the Green Car Initiative.

Supportive organizations
Organizations that support plug-in hybrids include the World Wide Fund for Nature, the National Wildlife Federation, and CalCars. Other supportive organizations are Plug In America, the Alliance for Climate Protection, Friends of the Earth, the Rainforest Action Network, Rocky Mountain Institute (Project Get Ready), the San Francisco Bay Area Council, the Apollo Alliance, the Set America Free Coalition, the Silicon Valley Leadership Group, and the Plug-in Hybrid Electric School Bus Project. FPL and Duke Energy have said that by 2020 all new purchases of fleet vehicles will be plug-in hybrid or all-electric.

Further reading
American Council for an Energy-Efficient Economy, Plug-in Electric Vehicles: Challenges and Opportunities, June 2013
Argonne National Laboratory, Cradle-to-Grave Lifecycle Analysis of U.S. Light-Duty Vehicle-Fuel Pathways: A Greenhouse Gas Emissions and Economic Assessment of Current (2015) and Future (2025–2030) Technologies (includes estimated cost of avoided GHG emissions from BEVs and PHEVs), June 2016
Boschert, Sherry (2007). Plug-in Hybrids: The Cars that will Recharge America (1st ed.). New Society Publishers. ISBN 9780865715714. OCLC 74524214.
International Council on Clean Transportation, Driving Electrification – A Global Comparison of Fiscal Incentive Policy for Electric Vehicles, May 2014
International Energy Agency (IEA) and Electric Vehicles Initiative (April 2013), Global EV Outlook 2013 – Understanding the Electric Vehicle Landscape to 2020
International Energy Agency (IEA) – IA-HEV (May 2013), Hybrid and Electric Vehicles – The Electric Drive Gains Traction
Lee, Henry, and Grant Lovellette (2011). Will Electric Cars Transform the U.S. Vehicle Market? Belfer Center, Harvard University.
Nevres, Cefo (2009). Two Cents per Mile: Will President Obama Make it Happen With the Stroke of a Pen?. Nevlin. ISBN 9780615293912. OCLC 463395305.
Sandalow, David B., ed. (2009). Plug-In Electric Vehicles: What Role for Washington? (1st ed.). The Brookings Institution. ISBN 9780815703051. OCLC 895434772.
Michalek, Jeremy (February 2015). "CMU team finds regional temperature differences have significant impact on EV efficiency, range and emissions". Green Car Congress.
Romm, Joseph J. and Fox-Penner, Peter (2007). Plugging into the Grid: How Plug-In Hybrid-Electric Vehicles Can Help Break America's Oil Addiction and Slow Global Warming. Progressive Policy Institute.
U.S. Environmental Protection Agency, Application of Life-Cycle Assessment to Nanoscale Technology: Lithium-ion Batteries for Electric Vehicles, April 2013.
US Office of Energy Efficiency and Renewable Energy, Plug-In Hybrid Electric Vehicle Value Proposition Study Final Report, July 2010.
Plug-in Hybrid Electric Vehicles. Alternative Fuels and Advanced Vehicles Data Center (AFDC), including list of books and publications.
US National Highway Traffic Safety Administration, Interim Guidance: Electric and Hybrid Electric Vehicles Equipped with High Voltage Batteries – Vehicle Owner/General Public.
US National Highway Traffic Safety Administration, Interim Guidance: Electric and Hybrid Electric Vehicles Equipped with High Voltage Batteries – Law Enforcement/Emergency Medical Services/Fire Department.

External links
Plug In America – Non-profit advocacy group.
eGallon Calculator: Compare the costs of driving with electricity. U.S. Department of Energy.
synthetic fuel
Synthetic fuel or synfuel is a liquid fuel, or sometimes gaseous fuel, obtained from syngas, a mixture of carbon monoxide and hydrogen, in which the syngas was derived from gasification of solid feedstocks such as coal or biomass, or by reforming of natural gas. Common ways of refining synthetic fuels include the Fischer–Tropsch conversion, methanol-to-gasoline conversion, or direct coal liquefaction.

Classification and principles
The term 'synthetic fuel' or 'synfuel' has several different meanings and may include different types of fuels. More traditional definitions define 'synthetic fuel' or 'synfuel' as a hydrocarbon produced by a sequence of chemical reactions that synthesize the fuel from a feedstock, typically coal or natural gas, rather than by separating the hydrocarbon from crude oil by distillation. In its Annual Energy Outlook 2006, the Energy Information Administration defines synthetic fuels as fuels produced from coal, natural gas, or biomass feedstocks through chemical conversion into synthetic crude and/or synthetic liquid products. A number of definitions of synthetic fuel include fuels produced from biomass and from industrial and municipal waste. These definitions also allow oil sands and oil shale to be understood as synthetic fuel sources. In addition to liquid fuels, synthesized gaseous fuels are also considered to be synthetic fuels. In his Synthetic Fuels Handbook, petrochemist James G. Speight included liquid and gaseous fuels as well as clean solid fuels produced by conversion of coal, oil shale or tar sands, and various forms of biomass, although he admits that in the context of substitutes for petroleum-based fuels the term has an even wider meaning. Depending on the context, methanol, ethanol and hydrogen may also be included in this category.

Synthetic fuels are produced by the chemical process of conversion. Conversion methods can be direct conversion into liquid transportation fuels, or indirect conversion, in which the source substance is converted initially into syngas, which then goes through an additional conversion process to become liquid fuels. Basic conversion methods include carbonization and pyrolysis, hydrogenation, and thermal dissolution.

History
The process of direct conversion of coal to synthetic fuel was originally developed in Germany. Friedrich Bergius developed the Bergius process, which received a patent in 1913. Karl Goldschmidt invited Bergius to build an industrial plant at his factory, the Th. Goldschmidt AG (part of Evonik Industries from 2007), in 1914. Production began in 1919. Indirect coal conversion (where coal is gasified and then converted to synthetic fuels) was also developed in Germany, by Franz Fischer and Hans Tropsch in 1923. During World War II (1939-1945), Germany used synthetic-oil manufacturing (German: Kohleverflüssigung) to produce substitute (Ersatz) oil products by using the Bergius process (from coal), the Fischer–Tropsch process (water gas), and other methods (Zeitz used the TTH and MTH processes). In 1931 the British Department of Scientific and Industrial Research, located in Greenwich, England, set up a small facility where hydrogen gas was combined with coal at extremely high pressures to make a synthetic fuel.

The Bergius process plants became Nazi Germany's primary source of high-grade aviation gasoline, synthetic oil, synthetic rubber, synthetic methanol, synthetic ammonia, and nitric acid.
Nearly one third of the Bergius production came from plants in Pölitz (Polish: Police) and Leuna, with another third from five other plants (Ludwigshafen had a much smaller Bergius plant which improved "gasoline quality by dehydrogenation" using the DHD process). Synthetic fuel grades included "T.L. [jet] fuel", "first quality aviation gasoline", "aviation base gasoline", and "gasoline - middle oil"; producer gas and diesel were synthesized for fuel as well (converted armored tanks, for example, used producer gas). By early 1944 German synthetic-fuel production had reached more than 124,000 barrels per day (19,700 m3/d) from 25 plants, including 10 in the Ruhr Area. In 1937 the four central German lignite coal plants at Böhlen, Leuna, Magdeburg/Rothensee, and Zeitz, along with the Ruhr Area bituminous coal plant at Scholven/Buer, produced 4.8 million barrels (760×10^3 m3) of fuel. Four new hydrogenation plants (German: Hydrierwerke) were subsequently erected at Bottrop-Welheim (which used "bituminous coal tar pitch"), Gelsenkirchen (Nordstern), Pölitz, and, at 200,000 tons per year, Wesseling. Nordstern and Pölitz/Stettin used bituminous coal, as did the new Blechhammer plants. Heydebreck synthesized food oil, which was tested on concentration camp prisoners. After Allied bombing of Germany's synthetic-fuel production plants (especially in May to June 1944), the Geilenberg Special Staff used 350,000 mostly foreign forced-laborers to reconstruct the bombed synthetic-oil plants, and, in an emergency decentralization program, the Mineralölsicherungsplan (1944-1945), to build seven underground hydrogenation plants with bombing protection (none were completed). (Planners had rejected an earlier such proposal, expecting that Axis forces would win the war before the bunkers would be completed.) In July 1944 the "Cuckoo" project, an underground synthetic-oil plant (800,000 m2), was being "carved out of the Himmelsburg" north of the Mittelwerk, but the plant remained unfinished at the end of World War II. Production of synthetic fuel became even more vital for Nazi Germany when Soviet Red Army forces occupied the Ploiești oilfields in Romania on 24 August 1944, denying Germany access to its most important natural oil source.

Indirect Fischer–Tropsch ("FT") technologies were brought to the United States after World War II, and a 7,000 barrels per day (1,100 m3/d) plant was designed by HRI and built in Brownsville, Texas. The plant represented the first commercial use of high-temperature Fischer–Tropsch conversion. It operated from 1950 to 1955, when it was shut down after the price of oil dropped owing to enhanced production and huge discoveries in the Middle East. In 1949 the U.S. Bureau of Mines built and operated a demonstration plant for converting coal to gasoline in Louisiana, Missouri. Direct coal conversion plants were also developed in the US after World War II, including a 3 tons-per-day plant in Lawrenceville, New Jersey, and a 250-600 tons-per-day plant in Catlettsburg, Kentucky. In later decades the Republic of South Africa established a state oil company that included a large synthetic fuel establishment.

Processes
The numerous processes that can be used to produce synthetic fuels broadly fall into three categories: indirect, direct, and biofuel processes.
Indirect conversion
Indirect conversion has the widest deployment worldwide, with global production totaling around 260,000 barrels per day (41,000 m3/d), and many additional projects under active development. Indirect conversion broadly refers to a process in which biomass, coal, or natural gas is converted to a mix of hydrogen and carbon monoxide known as syngas, either through gasification or steam methane reforming, and that syngas is processed into a liquid transportation fuel using one of a number of different conversion techniques depending on the desired end product. The primary technologies that produce synthetic fuel from syngas are Fischer–Tropsch synthesis and the Mobil process (also known as methanol-to-gasoline, or MTG). In the Fischer–Tropsch process, syngas reacts in the presence of a catalyst, transforming into liquid products (primarily diesel fuel and jet fuel) and potentially waxes (depending on the FT process employed).

The process of producing synfuels through indirect conversion is often referred to as coal-to-liquids (CTL), gas-to-liquids (GTL) or biomass-to-liquids (BTL), depending on the initial feedstock. At least three projects (Ohio River Clean Fuels, Illinois Clean Fuels, and Rentech Natchez) are combining coal and biomass feedstocks, creating hybrid-feedstock synthetic fuels known as coal and biomass to liquids (CBTL). Indirect conversion process technologies can also be used to produce hydrogen, potentially for use in fuel cell vehicles, either as a slipstream co-product or as a primary output.

Direct conversion
Direct conversion refers to processes in which coal or biomass feedstocks are converted directly into intermediate or final products, avoiding the conversion to syngas via gasification. Direct conversion processes can be broadly broken up into two different methods: pyrolysis and carbonization, and hydrogenation.

Hydrogenation processes
One of the main methods of direct conversion of coal to liquids by hydrogenation is the Bergius process. In this process, coal is liquefied by heating in the presence of hydrogen gas (hydrogenation). Dry coal is mixed with heavy oil recycled from the process. Catalysts are typically added to the mixture. The reaction occurs at between 400 °C (752 °F) and 500 °C (932 °F) and 20 to 70 MPa hydrogen pressure. The reaction can be summarized as follows:

n C + (n + 1) H2 → CnH2n+2

After World War I several plants were built in Germany; these plants were extensively used during World War II to supply Germany with fuel and lubricants. The Kohleoel Process, developed in Germany by Ruhrkohle and VEBA, was used in a demonstration plant with a capacity of 200 tons of lignite per day, built in Bottrop, Germany. This plant operated from 1981 to 1987. In this process, coal is mixed with a recycle solvent and an iron catalyst. After preheating and pressurizing, H2 is added. The process takes place in a tubular reactor at a pressure of 300 bar and a temperature of 470 °C (880 °F). This process was also explored by SASOL in South Africa. In the 1970s and 1980s, the Japanese companies Nippon Kokan, Sumitomo Metal Industries and Mitsubishi Heavy Industries developed the NEDOL process. In this process, a mixture of coal and recycled solvent is heated in the presence of an iron-based catalyst and H2. The reaction takes place in a tubular reactor at temperatures between 430 °C (810 °F) and 465 °C (870 °F) and a pressure of 150-200 bar.
The oil produced is of low quality and requires intensive upgrading. The H-Coal process, developed by Hydrocarbon Research, Inc., in 1963, mixes pulverized coal with recycled liquids, hydrogen and a catalyst in an ebullated bed reactor. The advantages of this process are that dissolution and oil upgrading take place in a single reactor, the products have a high H:C ratio, and the reaction time is fast, while the main disadvantages are a high gas yield, high hydrogen consumption, and the limitation that the oil can be used only as boiler oil because of impurities. The SRC-I and SRC-II (Solvent Refined Coal) processes were developed by Gulf Oil and implemented as pilot plants in the United States in the 1960s and 1970s. The Nuclear Utility Services Corporation developed a hydrogenation process which was patented by Wilburn C. Schroeder in 1976. The process involved dried, pulverized coal mixed with roughly 1 wt% molybdenum catalyst. Hydrogenation occurred by use of high-temperature, high-pressure syngas produced in a separate gasifier. The process ultimately yielded a synthetic crude product: naphtha, a limited amount of C3/C4 gas, light-medium-weight liquids (C5-C10) suitable for use as fuels, small amounts of NH3 and significant amounts of CO2. Other single-stage hydrogenation processes are the Exxon donor solvent process, the Imhausen high-pressure process, and the Conoco zinc chloride process.

A number of two-stage direct liquefaction processes have been developed. After the 1980s only the Catalytic Two-Stage Liquefaction Process, modified from the H-Coal process; the Liquid Solvent Extraction Process by British Coal; and the Brown Coal Liquefaction Process of Japan continued to be developed. Chevron Corporation developed a process invented by Joel W. Rosenthal called the Chevron Coal Liquefaction Process (CCLP). It is unique due to the close coupling of the non-catalytic dissolver and the catalytic hydroprocessing unit. The oil produced had properties that were unique when compared to other coal oils; it was lighter and had far fewer heteroatom impurities. The process was scaled up to the 6 tons-per-day level, but not proven commercially.

Pyrolysis and carbonization processes
There are a number of different carbonization processes. The carbonization conversion occurs through pyrolysis or destructive distillation, and it produces condensable coal tar, oil and water vapor, non-condensable synthetic gas, and a solid residue, char. The condensed coal tar and oil are then further processed by hydrogenation to remove sulfur and nitrogen species, after which they are processed into fuels.

The typical example of carbonization is the Karrick process, invented by Lewis Cass Karrick in the 1920s. The Karrick process is a low-temperature carbonization process in which coal is heated at 680 °F (360 °C) to 1,380 °F (750 °C) in the absence of air. These temperatures optimize the production of coal tars richer in lighter hydrocarbons than normal coal tar. However, the liquids produced are mostly a by-product, and the main product is semi-coke, a solid and smokeless fuel.

The COED Process, developed by FMC Corporation, uses a fluidized bed for processing, in combination with increasing temperature, through four stages of pyrolysis. Heat is transferred by hot gases produced by combustion of part of the produced char. A modification of this process, the COGAS Process, involves the addition of gasification of the char.
The TOSCOAL Process, an analogue of the TOSCO II oil shale retorting process and the Lurgi-Ruhrgas process, which are also used for shale oil extraction, uses hot recycled solids for heat transfer. Liquid yields of the pyrolysis and Karrick processes are generally too low for practical synthetic liquid fuel production. Furthermore, the resulting liquids are of low quality and require further treatment before they can be used as motor fuels. In summary, there is little possibility that this route will yield economically viable volumes of liquid fuel.

Biofuels processes
One example of a biofuel-based synthetic fuel process is Hydrotreated Renewable Jet (HRJ) fuel. There are a number of variants of these processes under development, and the testing and certification process for HRJ aviation fuels is beginning. Two such processes are under development by UOP: one uses solid biomass feedstocks, and one uses bio-oil and fats. The process using solid second-generation biomass sources such as switchgrass or woody biomass uses pyrolysis to produce a bio-oil, which is then catalytically stabilized and deoxygenated to produce a jet-range fuel. The process using natural oils and fats goes through a deoxygenation process, followed by hydrocracking and isomerization, to produce a renewable synthetic paraffinic kerosene jet fuel.

Oil sand and oil shale processes
Synthetic crude may also be created by upgrading bitumen (a tar-like substance found in oil sands) or by synthesizing liquid hydrocarbons from oil shale. There are a number of processes extracting shale oil (synthetic crude oil) from oil shale by pyrolysis, hydrogenation, or thermal dissolution.

Commercialization
Worldwide commercial synthetic fuels plant capacity is over 240,000 barrels per day (38,000 m3/d), including indirect conversion Fischer–Tropsch plants in South Africa (Mossgas, Secunda CTL), Qatar (Oryx GTL), and Malaysia (Shell Bintulu), and a Mobil process (methanol-to-gasoline) plant in New Zealand. Synthetic fuel plant capacity is approximately 0.24% of the 100 million barrels per day of crude oil refining capacity worldwide. Sasol, a company based in South Africa, operates the world's only commercial Fischer–Tropsch coal-to-liquids facility, at Secunda, with a capacity of 150,000 barrels per day (24,000 m3/d). The British company Zero, co-founded by former F1 technical director Paddy Lowe, has developed a solution it terms 'petrosynthesis' to develop synthetic fuels, and in 2022 it began work on a demonstration production plant at Bicester Heritage near Oxford.

Economics
The economics of synthetic fuel manufacture vary greatly depending on the feedstock used, the precise process employed, site characteristics such as feedstock and transportation costs, and the cost of the additional equipment required to control emissions. The examples described below indicate a wide range of production costs, from $20/BBL for large-scale gas-to-liquids to as much as $240/BBL for small-scale biomass-to-liquids with carbon capture and sequestration. In order to be economically viable, projects must do much better than just being competitive head-to-head with oil. They must also generate a sufficient return on investment to justify the capital investment in the project.

Security considerations
A central consideration in the development of synthetic fuel is energy security: securing a domestic fuel supply from domestic biomass and coal.
Nations that are rich in biomass and coal can use synthetic fuel to offset their use of petroleum-derived fuels and foreign oil.

Environmental considerations
The environmental footprint of a given synthetic fuel varies greatly depending on which process is employed, what feedstock is used, what pollution controls are employed, and what the transportation distance and method are for both feedstock procurement and end-product distribution. In many locations, project development will not be possible due to permitting restrictions if a process design is chosen that does not meet local requirements for clean air, water, and, increasingly, lifecycle carbon emissions.

Lifecycle greenhouse gas emissions
Among the different indirect FT synthetic fuels production technologies, potential emissions of greenhouse gases vary greatly. Coal-to-liquids (CTL) without carbon capture and sequestration (CCS) is expected to result in a significantly higher carbon footprint than conventional petroleum-derived fuels (+147%). On the other hand, biomass-to-liquids with CCS could deliver a 358% reduction in lifecycle greenhouse gas emissions. Both of these plant types fundamentally use gasification and FT-conversion synthetic fuels technology, but they deliver wildly divergent environmental footprints. Generally, CTL without CCS has a higher greenhouse gas footprint, while CTL with CCS delivers a 9-15% reduction in lifecycle greenhouse gas emissions compared to petroleum-derived diesel.

CBTL+CCS plants that blend biomass alongside coal while sequestering carbon do progressively better the more biomass is added. Depending on the type of biomass, the assumptions about root storage, and the transportation logistics, at a conservative 40% biomass alongside coal, CBTL+CCS plants achieve a neutral lifecycle greenhouse gas footprint. At more than 40% biomass, they begin to go lifecycle-negative, effectively storing carbon in the ground for every gallon of fuel they produce. Ultimately, BTL plants employing CCS could store massive amounts of carbon while producing transportation fuels from sustainably produced biomass feedstocks, although there are a number of significant economic hurdles, and a few technical hurdles, that would have to be overcome to enable the development of such facilities.

Serious consideration must also be given to the type and method of feedstock procurement for either the coal or the biomass used in such facilities, as reckless development could exacerbate environmental problems caused by mountaintop removal mining, land use change, fertilizer runoff, or food-versus-fuel concerns, among other potential factors; whether it does depends entirely on project-specific considerations on a plant-by-plant basis. A study from the U.S. Department of Energy's National Energy Technology Laboratory, "Affordable Low Carbon Diesel from Domestic Coal and Biomass", provides much more in-depth information on CBTL life-cycle emissions. Hybrid hydrogen-carbon processes have also been proposed recently as another closed-carbon-cycle alternative, combining 'clean' electricity, recycled CO, H2 and captured CO2 with biomass as inputs as a way of reducing the biomass needed.

Fuels emissions
The fuels produced by the various synthetic fuels processes also have a wide range of potential environmental performance, though they tend to be fairly uniform for a given type of synthetic fuels process (for example,
the tailpipe emission characteristics of Fischer–Tropsch diesel tend to be the same, though the lifecycle greenhouse gas footprint can vary substantially based on which plant produced the fuel, depending on feedstock and plant-level sequestration considerations). In particular, Fischer–Tropsch diesel and jet fuels deliver dramatic across-the-board reductions in all major criteria pollutants such as SOx, NOx, particulate matter, and hydrocarbon emissions. These fuels, because of their high level of purity and lack of contaminants, allow the use of advanced emissions control equipment. In a 2005 dynamometer study simulating urban driving, diesel trucks running on a Shell gas-to-liquid fuel and fitted with a combination particulate filter and catalytic converter were shown to virtually eliminate HC, CO, and PM emissions, at the cost of a 10% increase in fuel consumption, compared to the same trucks unmodified and using California Air Resources Board diesel fuel.

In testimony before the Subcommittee on Energy and Environment of the U.S. House of Representatives, the following statement was made by a senior scientist from Rentech: F-T fuels offer numerous benefits to aviation users. The first is an immediate reduction in particulate emissions. F-T jet fuel has been shown in laboratory combusters and engines to reduce PM emissions by 96% at idle and 78% under cruise operation. Validation of the reduction in other turbine engine emissions is still under way. Concurrent with the PM reductions is an immediate reduction in CO2 emissions from F-T fuel. F-T fuels inherently reduce CO2 emissions because they have higher energy content per carbon content of the fuel, and the fuel is less dense than conventional jet fuel, allowing aircraft to fly further on the same load of fuel. The "cleanness" of these FT synthetic fuels is further demonstrated by the fact that they are sufficiently non-toxic and environmentally benign as to be considered biodegradable. This owes primarily to the near-absence of sulfur and the extremely low level of aromatics present in the fuel.

Sustainability
One concern commonly raised about the development of synthetic fuels plants is sustainability. Fundamentally, transitioning from oil to coal or natural gas for transportation fuels production is a transition from one inherently depletable, geologically limited resource to another. One of the positive defining characteristics of synthetic fuels production is the ability to use multiple feedstocks (coal, gas, or biomass) to produce the same product from the same plant. In the case of hybrid CBTL plants, some facilities are already planning to use a significant biomass component alongside coal. Ultimately, given the right location with good biomass availability and sufficiently high oil prices, synthetic fuels plants can be transitioned from coal or gas over to a 100% biomass feedstock. This provides a path towards a renewable, and possibly more sustainable, fuel source, even if the plant originally produced fuels solely from coal, making the infrastructure forwards-compatible even if the original fossil feedstock runs out. Some synthetic fuels processes can be converted to sustainable production practices more easily than others, depending on the process equipment selected.
Feedstock flexibility is an important design consideration as these facilities are planned and implemented, since additional room must be left in the plant layout to accommodate whatever future changes in materials handling and gasification might be necessary for a future change in production profile.

For vehicles with internal combustion engines

Electrofuels, also known as e-fuels or synthetic fuels, are a type of drop-in replacement fuel. They are manufactured using captured carbon dioxide or carbon monoxide, together with hydrogen obtained from sustainable electricity sources such as wind, solar and nuclear power. The process uses carbon dioxide in manufacturing and releases around the same amount of carbon dioxide into the air when the fuel is burned, for an overall low carbon footprint. Electrofuels are thus an option for reducing greenhouse gas emissions from transport, particularly for long-distance freight, marine, and air transport. The primary targets are butanol and biodiesel, but they also include other alcohols and carbon-containing gases such as methane and butane.
exxonmobil
ExxonMobil Corporation (EKS-on-MOH-bəl; commonly shortened to Exxon) is an American multinational oil and gas corporation and the largest direct descendant of John D. Rockefeller's Standard Oil. The company, which took its present name in 1999 upon the merger of Exxon and Mobil, is vertically integrated across the entire oil and gas industry and also includes a chemicals division which produces plastic, synthetic rubber, and other chemical products. ExxonMobil is headquartered near the Houston suburb of Spring, Texas, though officially incorporated in the U.S. state of New Jersey.: 1

ExxonMobil's history traces its earliest roots to 1866 with the formation of the Vacuum Oil Company, itself acquired by Standard Oil in 1879. The company that is today known as ExxonMobil grew out of the Standard Oil Company of New Jersey (or Jersey Standard for short), the corporate entity which effectively controlled all of Standard Oil prior to its breakup. Jersey Standard grew alongside, and in extensive partnership with, another Standard Oil descendant and its future merger partner, the Standard Oil Company of New York (Socony); both grew bigger by merging with other companies, such as Humble Oil (which merged with Jersey Standard) and Vacuum Oil (which merged with Socony). Both companies underwent rebranding in the 1960s and early 1970s, and by the time of the 1999 merger, Jersey Standard had become known as Exxon, and Socony as Mobil. The merger agreement between Exxon and Mobil stipulated that Exxon would buy Mobil and rebrand as ExxonMobil, with Mobil's CEO becoming the vice-chairman of the company.

ExxonMobil is one of the world's largest and most powerful companies. Since its merger it has ranked between the first and tenth largest publicly traded company by revenue, and it has one of the largest market capitalizations of any company. As of 2023, in the most recent rankings released in the Fortune 500, ExxonMobil was ranked third, and twelfth on the Fortune Global 500. ExxonMobil is the largest investor-owned oil company in the world, the largest oil company headquartered in the Western world, and the largest of the Big Oil companies in both production and market value. ExxonMobil's reserves were 20 billion BOE at the end of 2016 and were expected, at 2007 rates of production, to last more than 14 years. With 21 oil refineries constituting a combined daily refining capacity of 4.9 million barrels (780,000 m3), ExxonMobil is the second largest oil refiner in the world, trailing only Sinopec. Approximately 55.56% of the company's shares are held by institutions, the largest of which as of 2019 were The Vanguard Group (8.15%), BlackRock (6.61%), and State Street Corporation (4.83%).

ExxonMobil has been widely criticized, mostly for environmental incidents and its history of climate change denial against the scientific consensus that fossil fuels significantly contribute to global warming. The company has been responsible for many oil spills, the largest and most notable of which was the Exxon Valdez oil spill in Alaska, itself considered one of the world's worst oil spills in terms of damage to the environment. The company has also been the target of accusations of human rights violations, of excessive influence on America's foreign policy, and over its impact on various societies across the world.

History

ExxonMobil traces its roots to Vacuum Oil Company, founded in 1866.
Vacuum Oil was later acquired by Standard Oil in 1879, divested from Standard in 1911 with its breakup, and merged into the Standard Oil Company of New York (Socony), later known as Mobil, in 1931. After the 1911 breakup, Standard Oil continued to exist through its New Jersey subsidiary, sometimes shortened to Jersey Standard, and retained the Standard Oil name in much of the eastern United States. Jersey Standard grew by acquiring Humble Oil in the 1930s and became the dominant oil company on the world stage. The company's lack of ownership of the Standard Oil name across the United States, however, prompted a name change to unify all of its brands under one name; it chose to rename itself Exxon in 1972 rather than continue to use the three distinct brands of Esso, Enco, and Humble.

In 1998, the two companies agreed to merge and form ExxonMobil, with the deal closing on November 30, 1999. The two companies cited lower oil prices and a better ability to compete with state-owned oil companies outside of the United States, such as Pemex and Aramco. The new company's name contains the trade names of both of its immediate predecessors; however, the structure of the merger provided that Exxon was the surviving company and bought Mobil, rather than a new company being created.

Operations

ExxonMobil is the largest non-government-owned company in the energy industry and produces about 3% of the world's oil and about 2% of the world's energy. It is vertically integrated into a number of global operating divisions. These divisions are grouped into three categories for reference purposes, though the company also has several standalone divisions, such as Coal & Minerals. It also owns hundreds of smaller subsidiaries such as XTO Energy and SeaRiver Maritime, and holds a majority ownership stake in Imperial Oil. The three categories are:

Upstream (oil exploration, extraction, shipping, and wholesale operations)
Product Solutions (downstream, chemical)
Low Carbon Solutions

Upstream

The upstream division makes up the majority of ExxonMobil's revenue, accounting for approximately 70% of it. In 2021, ExxonMobil had about 30 billion barrels of oil and oil-equivalent reserves, as well as 38.1 billion cubic feet of natural gas.

In the United States, ExxonMobil's petroleum exploration and production activities are concentrated in the Permian Basin, Bakken Formation, Woodford Shale, Caney Shale, and the Gulf of Mexico. In addition, ExxonMobil has several gas developments in the regions of Marcellus Shale, Utica Shale, Haynesville Shale, Barnett Shale, and Fayetteville Shale. All natural gas activities are conducted by its subsidiary, XTO Energy. As of December 31, 2014, ExxonMobil owned 14.6 million acres (59,000 km2) in the United States, of which 1.7 million acres (6,900 km2) were offshore, 1.5 million acres (6,100 km2) of those in the Gulf of Mexico. In California, it has a joint venture called Aera Energy LLC with Shell Oil. In Canada, the company holds 5.4 million acres (22,000 km2), including 1 million acres (4,000 km2) offshore and 0.7 million acres (2,800 km2) of the Kearl Oil Sands Project.

In Argentina, ExxonMobil holds 0.9 million acres (3,600 km2), and in Germany 4.9 million acres (20,000 km2). In the Netherlands ExxonMobil owns 1.5 million acres (6,100 km2), in Norway it owns 0.4 million acres (1,600 km2) offshore, and in the United Kingdom 0.6 million acres (2,400 km2) offshore.
In Africa, upstream operations are concentrated in Angola, where it owns 0.4 million acres (1,600 km2) offshore; Chad, where it owns 46,000 acres (19,000 ha); Equatorial Guinea, where it owns 0.1 million acres (400 km2) offshore; and Nigeria, where it owns 0.8 million acres (3,200 km2) offshore. In addition, ExxonMobil plans to start exploration activities off the coasts of Liberia and the Ivory Coast. In the past, ExxonMobil had exploration activities in Madagascar; however, these operations were ended due to unsatisfactory results.

In Asia, it holds 9,000 acres (3,600 ha) in Azerbaijan, 1.7 million acres (6,900 km2) in Indonesia, of which 1.3 million acres (5,300 km2) are offshore, 0.7 million acres (2,800 km2) in Iraq, 0.3 million acres (1,200 km2) in Kazakhstan, 0.2 million acres (810 km2) in Malaysia, 65,000 acres (26,000 ha) in Qatar, 10,000 acres (4,000 ha) in Yemen, 21,000 acres (8,500 ha) in Thailand, and 81,000 acres (33,000 ha) in the United Arab Emirates.

Until the 2022 Russian invasion of Ukraine, ExxonMobil held 85,000 acres (34,000 ha) in the Sakhalin-I project through its subsidiary Exxon Neftegas. Together with Rosneft, it had developed 63.6 million acres (257,000 km2) in Russia, including the East-Prinovozemelsky field. After Russia's 2022 invasion began, ExxonMobil announced it was fully pulling out of both Russia and Sakhalin-I, and launched a lawsuit against Russia's federal government on August 30.

In Australia, ExxonMobil held 1.7 million acres (6,900 km2), including 1.6 million acres (6,500 km2) offshore. It also operates the Longford Gas Conditioning Plant, and participates in the development of the Gorgon LNG project. In Papua New Guinea, it holds 1.1 million acres (4,500 km2), including the PNG Gas project.

Product Solutions

ExxonMobil formed its Product Solutions division in 2022, combining its previously separate Downstream and Chemical divisions into a single organization.

Downstream and retail

ExxonMobil markets products around the world under the brands of Exxon, Mobil, and Esso. Mobil is ExxonMobil's primary retail gasoline brand in California, Florida, New York, New England, the Great Lakes, and the Midwest. Exxon is the primary brand in the rest of the United States, with the highest concentration of retail outlets located in New Jersey, Pennsylvania, Texas (shared with Mobil), and in the Mid-Atlantic and Southeastern states. ExxonMobil has stations in 46 states, just behind Shell USA and ahead of Phillips 66, lacking a presence only in Alaska, Hawaii, Iowa, and Kansas.

Outside of the United States, the Esso and Mobil brands are primarily used, with Esso operating in 14 countries and Mobil operating in 29 countries and regions; Esso is the only one of the company's brands not used widely in the United States. In Japan, ExxonMobil had a 22% stake in TonenGeneral Sekiyu K.K., a refining company that merged into Eneos in 2017. Since 2008, Mobil has been the sole brand for the company's lubricants. Since 2018, ExxonMobil has operated a loyalty program, ExxonMobil Rewards+, through which customers earn rewards points when filling up at its stations in the United States and, later, the United Kingdom.

Chemicals

ExxonMobil Chemical is a petrochemical company that was created by merging Exxon's and Mobil's chemical operations in 1999.
Its principal products include basic olefins and aromatics, ethylene glycol, polyethylene, and polypropylene, along with speciality lines such as elastomers, plasticizers, solvents, process fluids, oxo alcohols and adhesive resins. The company also produces synthetic lubricant base stocks as well as lubricant additives, propylene packaging films and catalysts. ExxonMobil is the largest producer of butyl rubber. Infineum, a joint venture with Shell plc, manufactures and markets crankcase lubricant additives, fuel additives, and specialty lubricant additives, as well as automatic transmission fluids, gear oils, and industrial oils.

Sponsorships

Mobil 1, ExxonMobil's brand of synthetic motor oil, is a major sponsor of multiple racing teams and has been the official motor oil of NASCAR since 2003. ExxonMobil is currently in partnerships with Oracle Red Bull Racing in Formula One and with Kalitta Motorsports.

Refineries

ExxonMobil operates 21 refineries worldwide, and the company claims 80% of its refining capacity is integrated with chemical or lube basestock operations. ExxonMobil's largest refinery in the United States is its Baytown Refinery, located in Baytown, Texas, and its largest refinery overall is its Jurong Island facility in Singapore; these two refineries have a combined output of over 1.15 million barrels of oil per day. In 2021, ExxonMobil's global average refining capacity was 4.6 million barrels per day, with the United States accounting for a plurality of the company's refining capacity at about 1.77 million barrels per day. ExxonMobil's corporate website claims it refines almost 5 million barrels per day.

ExxonMobil was one of few U.S. refiners to expand capacity by a significant margin following an industry downturn suffered during the COVID-19 pandemic. The company completed a 250,000 barrels per day expansion at its Beaumont, Texas, refinery in early 2023.

Low Carbon Solutions

Officially formed with ExxonMobil's 2022 corporate restructuring, and currently led by former General Motors president Dan Ammann, Low Carbon Solutions is the company's alternative energy division. The division intends to lower emissions in hard-to-decarbonize sectors such as heavy industry, commercial transportation, and power generation using a combination of lower-emission fuels, hydrogen, and carbon capture and storage. Low Carbon Solutions conducts research on clean energy technologies, including algae biofuels, biodiesel made from agricultural waste, carbonate fuel cells, and refining crude oil into plastic by using a membrane and osmosis instead of heat. The company speculated in April 2023 that, given favorable economic conditions, the low-carbon solutions business could eclipse the value of its oil and gas operations.

The company is in the process of designing its first large-scale plant dedicated to producing low-carbon hydrogen, situated within its refining and petrochemical complex in Baytown, Texas. This project is set to become the world's largest low-carbon hydrogen project.

Carbon capture and storage

ExxonMobil has publicly announced it would invest $15 billion in what it deemed a "lower carbon future", and claims to be the world leader in carbon capture and storage (CCS). The company additionally plans for its Scope 1 and Scope 2 emissions to be carbon neutral by 2050. ExxonMobil also acquired the biofuel company Biojet AS in 2022, and its Canadian subsidiary Imperial Oil is moving ahead with plans to produce a renewable diesel biofuel.
In July 2023, Exxon agreed to acquire Denbury Resources for $4.9 billion to further its low-carbon efforts.

Corporate affairs

Financial data

According to the Fortune Global 500, ExxonMobil was the second largest company, second largest publicly held corporation, and the largest oil company in the United States by 2017 revenue. For the fiscal year 2020, ExxonMobil reported a loss of US$22.4 billion, with an annual revenue of US$181.5 billion, a decline of 31.5% over the previous fiscal cycle.

Headquarters and offices

ExxonMobil's headquarters are located in Spring, Texas, a suburb of Houston. The company decided to consolidate its Houston operations into one new campus located in northern Harris County and vacate its offices at 800 Bell St., which it had occupied since 1963. The new complex includes twenty office buildings totaling 3,000,000 square feet (280,000 m2), a wellness center, a laboratory, and three parking garages. It is designed to house nearly 10,000 employees.

Board of directors

The current chairman of the board and CEO of ExxonMobil Corp. is Darren W. Woods. Woods was elected chairman of the board and CEO effective January 1, 2017, after the retirement of former chairman and CEO Rex Tillerson. Before his election as chairman and CEO, Woods was elected president of ExxonMobil and a member of the board of directors in 2016.

As of July 28, 2021, the ExxonMobil board members are:

Michael J. Angelakis, chair and chief executive officer of Atairos Group Inc.
Susan Avery, president emerita of Woods Hole Oceanographic Institution
Angela Braly, former president and CEO of WellPoint (now Anthem)
Ursula Burns, former chair and CEO of Xerox
Gregory J. Goff, former executive vice chair, Marathon Petroleum
Kaisa H. Hietala, board professional
Joseph L. Hooley, former chair, president and CEO of State Street
Steven A. Kandarian, chair, president and CEO of MetLife
Alexander A. Karsner, senior strategist at X Development
Jeffrey W. Ubben, founder, portfolio manager, and managing partner, Inclusive Capital Partners, L.P.
Darren W. Woods, chair of the board and CEO, ExxonMobil Corporation

Hooley is presently the lead independent director, having succeeded former Merck CEO Kenneth Frazier upon his retirement in May 2022. Three of the directors elected at the last annual general meeting were nominated by the hedge fund Engine No. 1 after a proxy battle, against the recommendation of the board.

Key executives

ExxonMobil's key executives are:

Darren Woods, chairman and CEO
Neil Chapman, senior vice president
Kathryn Mikells, CFO and senior vice president
Jack Williams, senior vice president
James Spellings, general tax counsel and vice president

Controversies

Climate change denial

ExxonMobil's environmental record has faced much criticism for its stance on, and impact on, global warming. In 2018, the Political Economy Research Institute ranked ExxonMobil tenth among American corporations in emissions of airborne pollutants, thirteenth in greenhouse gas emissions, and sixteenth in water pollutant emissions. A 2017 report placed ExxonMobil as the fifth largest contributor to greenhouse gas emissions from 1988 to 2015. As of 2005, ExxonMobil had committed less than 1% of its profits towards researching alternative energy, which, according to the advocacy organization Ceres, is less than other leading oil companies.
According to the 2021 Arctic Environmental Responsibility Index (AERI), ExxonMobil is ranked as the sixth most environmentally responsible company among 120 oil, gas, and mining companies involved in resource extraction north of the Arctic Circle. The company's activities have gained international notoriety from many incidents, most notably the Exxon Valdez oil spill in 1989. As of 2020, ExxonMobil had been responsible for more than 3,000 oil spills and leakages each resulting in a loss of more than one barrel of oil, with the most in a single year being 484 spills in 2011. Additionally, since 1965, ExxonMobil has released more than 40 billion tons of carbon dioxide pollution.

In 2023, the journal Science published a paper reporting that the global warming projections and models created by ExxonMobil's own scientists between 1977 and 2003 had "accurately" projected and "skillfully" modeled global warming due to fossil fuel burning, and had reasonably estimated how much CO2 would lead to dangerous warming. The authors of the paper concluded: "Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."

Between the 1980s and 2014, ExxonMobil was a notable denier of climate change, officially changing its position in 2014 to acknowledge its existence. ExxonMobil's prolonged denial incited the creation of the Exxon Knew movement, which aims to hold the company accountable for various climate-related incidents. ExxonMobil has used its own website to attack Exxon Knew, claiming that it is a coordinated effort to defame the company.

In December 2022, U.S. House Oversight and Reform Committee Chair Carolyn Maloney and U.S. House Oversight Environment Subcommittee Chair Ro Khanna sent a memorandum to all House Oversight and Reform Committee members summarizing additional findings from the committee's investigation into the fossil fuel industry's disinformation campaign to obscure the role of fossil fuels in causing global warming. Upon reviewing internal company documents, they accused ExxonMobil, along with BP, Chevron, and Shell, of greenwashing their Paris Agreement carbon neutrality pledges while continuing long-term investment in fossil fuel production and sales; of engaging in a campaign to promote natural gas as a clean energy source and bridge fuel to renewable energy; of intimidating journalists reporting on the companies' climate actions; and of obstructing the committee's investigation. ExxonMobil, Shell, and the American Petroleum Institute denied the accusations.

Oil spills and plastic pollution

ExxonMobil's operations have been subject to numerous oil spills both before and after the 1999 merger. The most widely publicized was the 1989 Valdez oil spill, in which an Exxon tanker discharged approximately 11 million U.S. gallons (42,000 m3) of oil into Prince William Sound, oiling 1,300 miles (2,100 km) of the remote Alaskan coastline. The spill remains the second largest in American history, trailing only BP's Deepwater Horizon spill in the Gulf of Mexico. ExxonMobil has also been responsible for various other oil spills across the world.
Some of Exxon's largest and most notable oil spills in the United States include long-running leaks by Exxon and other Standard Oil successors totaling an estimated 30 million gallons into New York City's Newtown Creek over the course of a century; a 2011 spill which leaked 1,500 barrels of oil into the Yellowstone River (resulting in about $135 million in damages); and a 2012 spill of 1,900 barrels (80,000 gallons) from the company's Baton Rouge Refinery into the waterways of Pointe Coupee Parish, Louisiana. ExxonMobil's activities in Louisiana in particular, especially its Baton Rouge Refinery, have contributed to the area's nickname of Cancer Alley. The company's activities, along with other operations and refineries in the area, have been blamed for increased cancer rates and lower air quality, and are seen by some as potential environmental racism committed by the company.

In May 2021, ExxonMobil topped the Plastic Waste Makers Index published by the Minderoo Foundation, a report on the 20 petrochemical companies that manufactured 55 percent of the world's single-use plastic waste in 2019 (themselves part of a larger group of 100 petrochemical companies that manufactured 90 percent of the waste). In April 2022, California Attorney General Rob Bonta issued a subpoena to ExxonMobil for information related to the company's role in overstating the effectiveness of plastic recycling in reducing plastic pollution, as part of an industry campaign to promote plastic usage.

Geopolitical influence and human rights violations

ExxonMobil has also been accused of human rights violations and of abusing its geopolitical influence. In the book Private Empire, Steve Coll describes ExxonMobil as an extremely powerful "corporate state within the American state" in its dealings with the countries in which it drills, going so far as to describe some of those countries' governments as "constrained". The company's corporate ancestors are also blamed for the outbreak of the 1954 Jebel Akhdar War, which was sparked by the Iraq Petroleum Company's activities.

Indonesia

Beginning in the late 1980s, ExxonMobil (through its predecessor Mobil) hired military units of the Indonesian National Army to provide security for its gas extraction and liquefaction project in Aceh, Indonesia; these military units were accused of committing human rights violations. ExxonMobil eventually pulled out of Indonesia completely in 2001, while denying any wrongdoing.

Other controversies

During a 2022 surge in profits among ExxonMobil and other large oil companies, partly due to the war in Ukraine, U.S. President Joe Biden criticized ExxonMobil. In June 2022, amid record oil prices, he said that "Exxon made more money than God this year". When the oil giant reported its second quarter earnings in 2022, CNN reported that Exxon had made US$2,245.62 per second in profit across the 92-day second quarter.
hydrogen economy
The hydrogen economy is an umbrella term that draws together the roles hydrogen can play alongside renewable electricity to decarbonize specific economic sectors, sub-sectors and activities which may be technically difficult to decarbonize through other means, or where cheaper and more energy-efficient clean solutions are not available. In this context, the hydrogen economy encompasses hydrogen's production through to end-uses in ways that substantively contribute to avoiding the use of fossil fuels and mitigating greenhouse gas emissions.

Most hydrogen produced today is 'gray' hydrogen, made from natural gas through steam methane reforming (SMR), which accounted for 1.8% of global greenhouse gas emissions in 2021. Low-carbon hydrogen, which is made using SMR with carbon capture and storage ('blue' hydrogen) or through electrolysis of water using renewable power ('green' hydrogen), accounted for under 1% of production. Virtually all hydrogen produced is used in oil refining (43% in 2021) and industry (57%), principally in the manufacture of ammonia for fertilizers, and of methanol.: 18, 22, 29

In its contribution to limiting global warming to 1.5 °C, it is broadly envisaged that the future hydrogen economy replaces gray hydrogen with blue and predominantly green hydrogen, produced in greater total volumes, to provide for an expanded set of end-uses. These are likely to be in heavy industry (e.g. high-temperature processes alongside electricity, feedstock for production of green ammonia and organic chemicals, and as an alternative to coal-derived coke for steelmaking), long-haul transport (e.g. shipping, aviation and, to a lesser extent, heavy goods vehicles), and long-term energy storage. Other applications, such as light duty vehicles and heating in buildings, are increasingly found to be out of scope for the future hydrogen economy, primarily for economic and environmental reasons. These reasons include the difficulty of developing long-term storage, pipelines, and engine equipment; safety concerns, since hydrogen is highly explosive; and the inefficiency of hydrogen compared to direct use of electricity.

The extent to which hydrogen will be used to decarbonise appropriate applications in heavy industry, long-haul transport and long-term energy storage is likely to be influenced by the evolving production costs of low- and zero-carbon hydrogen. Estimates of future costs face numerous uncertainties – such as the introduction of carbon taxes, the geography and geopolitics of energy, energy prices, technology choices, and their raw material requirements – but it is likely that green hydrogen will see the greatest reductions in production cost over time.

History and contemporary rationale

Origins

The concept of the hydrogen economy, though not the term, was proposed by geneticist J.B.S. Haldane in 1923: anticipating the exhaustion of Britain's coal reserves for power generation, he proposed a network of wind turbines producing hydrogen by electrolysis for long-term energy storage, to help address renewable power's variable output. The term itself was coined by John Bockris during a talk he gave in 1970 at the General Motors (GM) Technical Center.
Bockris viewed it as an economy in which hydrogen, underpinned by nuclear and solar power, would help address growing concern about fossil fuel depletion and environmental pollution by serving as an energy carrier for end-uses in which electrification was not suitable. A hydrogen economy was also proposed by the University of Michigan to solve some of the negative effects of using hydrocarbon fuels, in which the carbon is released to the atmosphere (as carbon dioxide, carbon monoxide, unburnt hydrocarbons, etc.). Modern interest in the hydrogen economy can generally be traced to a 1970 technical report by Lawrence W. Jones of the University of Michigan, in which he echoed Bockris' dual rationale of addressing energy security and environmental challenges. Unlike Haldane and Bockris, Jones focused only on nuclear power as the energy source for electrolysis, and principally on the use of hydrogen in transport, where he regarded aviation and heavy goods transport as the top priorities.

Later evolution

A spike in attention for the hydrogen economy concept during the 2000s was repeatedly described as hype by some critics and by proponents of alternative technologies, and investors lost money in the bubble. Interest in the energy carrier resurged in the 2010s, notably with the forming of the World Hydrogen Council in 2017. Several manufacturers released hydrogen fuel cell cars commercially, with manufacturers such as Toyota, Hyundai, and industry groups in China having planned to increase numbers of the cars into the hundreds of thousands over the next decade. The global scope for hydrogen's role in cars is shrinking relative to earlier expectations: by the end of 2022, 70,200 hydrogen vehicles had been sold worldwide, compared with 26 million plug-in electric vehicles.

Contemporary takes on the hydrogen economy share earlier perspectives' emphasis on the complementarity of electricity and hydrogen, and on the use of electrolysis as the mainstay of hydrogen production. They focus on the need to limit global warming to 1.5 °C and prioritise the production, transportation and use of green hydrogen for heavy industry (e.g. high-temperature processes alongside electricity, feedstock for production of green ammonia and organic chemicals, and as an alternative to coal-derived coke for steelmaking), long-haul transport (e.g. shipping, aviation and, to a lesser extent, heavy goods vehicles), and long-term energy storage.

Current hydrogen market

Hydrogen production globally was valued at over US$155 billion in 2022 and is expected to grow over 9% annually through 2030. In 2021, 94 million tonnes (Mt) of molecular hydrogen (H2) were produced. Of this total, approximately one sixth was produced as a by-product of petrochemical industry processes. Most hydrogen comes from dedicated production facilities, over 99% of it from fossil fuels, mainly via steam reforming of natural gas (70%) and coal gasification (30%, almost all of it in China). Less than 1% of dedicated hydrogen production is low-carbon: steam fossil fuel reforming with carbon capture and storage, green hydrogen produced using electrolysis, and hydrogen produced from biomass.
CO2 emissions from 2021 production, at 915 MtCO2, amounted to 2.5% of energy-related CO2 emissions and 1.8% of global greenhouse gas emissions. Virtually all hydrogen produced for the current market is used in oil refining (40 MtH2 in 2021) and industry (54 MtH2).: 18, 22  In oil refining, hydrogen is used, in a process known as hydrocracking, to convert heavy petroleum sources into lighter fractions suitable for use as fuels. Industrial uses mainly comprise ammonia production to make fertilisers (34 MtH2 in 2021), methanol production (15 MtH2) and the manufacture of direct reduced iron (5 MtH2).: 29

Production

As of 2022, more than 95% of global hydrogen production is sourced from fossil gas and coal without carbon abatement.: 1

Color codes

Hydrogen is often referred to by various colors to indicate its origin.

Methods of production

Molecular hydrogen was discovered in the Kola Superdeep Borehole. It is unclear how much molecular hydrogen is available in natural reservoirs, but at least one company specializes in drilling wells to extract hydrogen. Most hydrogen in the lithosphere is bonded to oxygen in water. Manufacturing elemental hydrogen therefore requires the consumption of a hydrogen carrier such as a fossil fuel or water. The former carrier consumes the fossil resource and, in the steam methane reforming (SMR) process, produces the greenhouse gas carbon dioxide; in the newer methane pyrolysis process, however, no carbon dioxide is produced. These processes typically require no further energy input beyond the fossil fuel. Decomposing water, the latter carrier, requires electrical or heat input generated from some primary energy source (fossil fuel, nuclear power or renewable energy).

Hydrogen produced by electrolysis of water using renewable energy sources such as wind and solar power is referred to as green hydrogen. When derived from natural gas by zero-greenhouse-emission methane pyrolysis, it is referred to as turquoise hydrogen. When derived from fossil fuels with greenhouse gas emissions, it is generally referred to as gray hydrogen; if most of the carbon dioxide emissions are captured, it is referred to as blue hydrogen. Hydrogen produced from coal may be referred to as brown or black hydrogen.

Current production methods

Steam reforming – gray or blue

Hydrogen is industrially produced by steam methane reforming (SMR), which uses natural gas. The energy content of the produced hydrogen is around 74% of the energy content of the original fuel, as some energy is lost as excess heat during production. In general, steam reforming emits carbon dioxide, a greenhouse gas, and the product is known as gray hydrogen. If the carbon dioxide is captured and stored, the hydrogen produced is known as blue hydrogen.

Electrolysis of water – green, pink or yellow

Hydrogen can be made via high-pressure or low-pressure electrolysis of water, or via a range of other emerging electrochemical processes such as high-temperature electrolysis or carbon-assisted electrolysis. However, current best processes for water electrolysis have an effective electrical efficiency of 70–80%, so that producing 1 kg of hydrogen (which has a specific energy of 143 MJ/kg, or about 40 kWh/kg) requires 50–55 kWh of electricity.
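The 50–55 kWh/kg figure follows directly from dividing hydrogen's specific energy by the electrolyser's electrical efficiency. The short sketch below simply restates that arithmetic; the specific-energy constant and efficiency range are the values quoted above, and the code itself is illustrative rather than drawn from any cited source.

```python
# Back-of-the-envelope check: electricity needed per kg of electrolytic
# hydrogen at a given electrical efficiency. The constant is hydrogen's
# specific energy as quoted in the text; efficiencies span the 70-80% range.

SPECIFIC_ENERGY_KWH_PER_KG = 40.0  # ~143 MJ/kg

def electricity_per_kg(efficiency: float) -> float:
    """kWh of electricity consumed per kg of H2 produced."""
    return SPECIFIC_ENERGY_KWH_PER_KG / efficiency

for eff in (0.70, 0.75, 0.80):
    print(f"{eff:.0%} efficiency -> {electricity_per_kg(eff):.0f} kWh/kg")
# 70% -> 57 kWh/kg, 75% -> 53 kWh/kg, 80% -> 50 kWh/kg,
# consistent with the 50-55 kWh range quoted above.
```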
In parts of the world, steam methane reforming produces hydrogen at between $1 and $3/kg on average, excluding pressurization costs. At such prices, production of hydrogen via electrolysis is already cost-competitive in many regions, as outlined by Nel Hydrogen and others, including an article by the IEA examining the conditions which could lead to a competitive advantage for electrolysis. Still, only a small share of hydrogen (2% in 2019) is produced by electrolysis of water, consuming approximately 50 to 55 kilowatt-hours of electricity per kilogram of hydrogen produced.

Hydrogen as a byproduct of other chemical processes

The industrial production of chlorine and caustic soda by electrolysis generates a sizable amount of hydrogen as a byproduct. In the port of Antwerp, a 1 MW demonstration fuel cell power plant is powered by such byproduct hydrogen; the unit has been operational since late 2011. Excess hydrogen is often managed with a hydrogen pinch analysis. Gas generated from coke ovens in steel production is similar to syngas, with 60% hydrogen by volume, and the hydrogen can be extracted from the coke oven gas economically.

Use as an energy carrier

Hydrogen can be deployed as a fuel in two distinct ways: in fuel cells, which produce electricity, and via combustion to generate heat. When hydrogen is consumed in fuel cells, the only emission at the point of use is water vapour. Combustion of hydrogen can lead to the thermal formation of harmful nitrogen oxide emissions.

Industry

In the context of limiting global warming, low-carbon hydrogen (particularly green hydrogen) is likely to play an important role in decarbonising industry. Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies such as electric arc furnaces for steelmaking. However, it is likely to play a larger role in providing industrial feedstock for cleaner production of ammonia and organic chemicals. For example, in steelmaking, hydrogen could function as a clean energy carrier and also as a low-carbon reducing agent replacing coal-derived coke. The imperative to use low-carbon hydrogen to reduce greenhouse gas emissions has the potential to reshape the geography of industrial activities, as locations with appropriate hydrogen production potential in different regions will interact in new ways with logistics infrastructure, raw material availability, and human and technological capital.

Transport

Much of the interest in the hydrogen economy concept is focused on the use of fuel cells to power hydrogen vehicles, particularly large trucks. Hydrogen vehicles produce significantly less local air pollution than conventional vehicles. By 2050, between 20% and 30% of the energy requirement for transportation might be fulfilled by hydrogen and synthetic fuels. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and, to a lesser extent, heavy goods vehicles, through the use of hydrogen-derived synthetic fuels such as ammonia and methanol, and through fuel cell technology. Hydrogen has been used in fuel cell buses for many years. It is also used as a fuel for spacecraft propulsion. In the light road vehicle segment, including passenger cars, by the end of 2022, 70,200 fuel cell electric vehicles had been sold worldwide, compared with 26 million plug-in electric vehicles.
With the rapid rise of electric vehicles and the associated battery technology and infrastructure, the global scope for hydrogen's role in cars is shrinking relative to earlier expectations. In the International Energy Agency's 2022 Net Zero Emissions Scenario (NZE), hydrogen is forecast to account for 2% of rail energy demand in 2050, while 90% of rail travel is expected to be electrified by then (up from 45% today). Hydrogen's role in rail would likely be focused on lines that prove difficult or costly to electrify. The NZE foresees hydrogen meeting approximately 30% of heavy truck energy demand in 2050, mainly for long-distance heavy freight (with battery electric power accounting for around 60%).

Although hydrogen can be used in adapted internal combustion engines, fuel cells, being electrochemical, have an efficiency advantage over heat engines. Fuel cells are, however, more expensive to produce than common internal combustion engines.

Buildings

Numerous industry groups (gas networks, gas boiler manufacturers) across the natural gas supply chain are promoting hydrogen combustion boilers for space and water heating, and hydrogen appliances for cooking, to reduce energy-related CO2 emissions from residential and commercial buildings. The proposition is that current end-users of piped natural gas can await the conversion of existing natural gas grids to hydrogen and then swap heating and cooking appliances, so that there is no need for consumers to do anything now.

A review of 32 studies on the question of hydrogen for heating buildings, independent of commercial interests, found that the economics and climate benefits of hydrogen for heating and cooking generally compare very poorly with the deployment of district heating networks, the electrification of heating (principally through heat pumps) and cooking, the use of solar thermal and waste heat, and the installation of energy efficiency measures to reduce energy demand for heat. Due to inefficiencies in hydrogen production, using blue hydrogen to replace natural gas for heating could require three times as much methane, while using green hydrogen would need two to three times as much electricity as heat pumps. Hybrid heat pumps, which combine an electric heat pump with a hydrogen boiler, may play a role in residential heating in areas where upgrading networks to meet peak electrical demand would otherwise be costly.

The widespread use of hydrogen for heating buildings would entail higher energy system costs, higher heating costs and higher environmental impacts than the alternatives, although a niche role may be appropriate in specific contexts and geographies. If deployed, using hydrogen in buildings would also drive up the cost of hydrogen for harder-to-decarbonise applications in industry and transport.

Energy system balancing and storage

Green hydrogen, from electrolysis of water, has the potential to address the variability of renewable energy output. Producing green hydrogen can reduce the need for renewable power curtailment during periods of high renewables output, and the hydrogen can be stored long-term to fuel power generation during periods of low output.

Ammonia

An alternative to gaseous hydrogen as an energy carrier is to bond it with nitrogen from the air to produce ammonia, which can be easily liquefied, transported, and used (directly or indirectly) as a clean and renewable fuel.
Among the disadvantages of ammonia as an energy carrier are its high toxicity, the extremely low energy efficiency of NH3 production from N2 and H2, and the poisoning of PEM fuel cells by traces of non-decomposed NH3 after NH3-to-N2 conversion.

Bio-SNG

As of 2019, production of syngas from hydrogen and carbon dioxide sourced from bio-energy with carbon capture and storage (BECCS) via the Sabatier reaction, although technically possible, is limited by the amount of sustainable bioenergy available; any bio-SNG made may therefore be reserved for the production of aviation biofuel.

Storage

Although molecular hydrogen has very high energy density on a mass basis, partly because of its low molecular weight, as a gas at ambient conditions it has very low energy density by volume. If it is to be used as fuel stored on board a vehicle, pure hydrogen gas must be stored in an energy-dense form to provide sufficient driving range. Because hydrogen is the smallest molecule, it easily escapes from containers, and leaked hydrogen has a global warming effect 11.6 times stronger than CO₂.

Pressurized hydrogen gas

Increasing gas pressure improves the energy density by volume, making for smaller container tanks. The standard material for holding pressurised hydrogen in tube trailers is steel (there is no hydrogen embrittlement problem with hydrogen gas). Tanks made of carbon- and glass-fibre-reinforced plastic, as fitted in the Toyota Mirai and in Kenworth trucks, are required to meet safety standards. Few materials are suitable for tanks, as hydrogen, being a small molecule, tends to diffuse through many polymeric materials. The most common on-board hydrogen storage in today's vehicles (as of 2020) is hydrogen at a pressure of 700 bar (70 MPa). The energy cost of compressing hydrogen to this pressure is significant. Pressurized gas pipelines are always made of steel and operate at much lower pressures than tube trailers.

Liquid hydrogen

Alternatively, liquid hydrogen or slush hydrogen, with their higher volumetric energy density, may be used. However, liquid hydrogen is cryogenic and boils at 20.268 K (–252.882 °C or –423.188 °F). Cryogenic storage cuts weight but requires large liquefaction energies; the liquefaction process, involving pressurizing and cooling steps, is energy-intensive. The liquefied hydrogen has lower energy density by volume than gasoline by approximately a factor of four, because of the low density of liquid hydrogen – there are actually more oxidizable hydrogen atoms in a litre of gasoline (116 grams) than there are in a litre of pure liquid hydrogen (71 grams). Like any other liquid at cryogenic temperatures, liquid hydrogen storage tanks must also be well insulated to minimize boil-off. Japan has a liquid hydrogen (LH2) storage facility at a terminal in Kobe, and was expected to receive the first shipment of liquid hydrogen via LH2 carrier in 2020. Hydrogen is liquefied by reducing its temperature to −253 °C, similar to liquefied natural gas (LNG), which is stored at −162 °C. Liquefaction losses can potentially be held to 12.79% of hydrogen's energy content, or 4.26 kWh/kg out of 33.3 kWh/kg.

Liquid organic hydrogen carriers (LOHC)

Storage as hydride

Distinct from storing molecular hydrogen, hydrogen can be stored as a chemical hydride or in some other hydrogen-containing compound. Hydrogen gas is reacted with some other material to produce the hydrogen storage material, which can be transported relatively easily. At the point of use, the hydrogen storage material can be made to decompose, yielding hydrogen gas.
As well as the mass and volume density problems associated with molecular hydrogen storage, current barriers to practical storage schemes stem from the high pressure and temperature conditions needed for hydride formation and hydrogen release. For many potential systems, hydriding and dehydriding kinetics and heat management are also issues that need to be overcome. The French company McPhy Energy is developing the first industrial product, based on magnesium hydride, already sold to some major clients such as Iwatani and ENEL. Emergent hydride hydrogen storage technologies have achieved a compressed volume of less than 1/500 that of the gas at ambient conditions.

Adsorption

A third approach is to adsorb molecular hydrogen on the surface of a solid storage material. Unlike in the hydrides mentioned above, the hydrogen does not dissociate/recombine upon charging/discharging the storage system, and hence does not suffer from the kinetic limitations of many hydride storage systems. Hydrogen densities similar to liquefied hydrogen can be achieved with appropriate adsorbent materials. Some suggested adsorbents include activated carbon, nanostructured carbons (including CNTs), MOFs, and hydrogen clathrate hydrate.

Underground hydrogen storage

Underground hydrogen storage is the practice of storing hydrogen in caverns, salt domes and depleted oil and gas fields. Large quantities of gaseous hydrogen have been stored in caverns by ICI for many years without any difficulties. The storage of large quantities of hydrogen underground can function as grid energy storage. The round-trip efficiency is approximately 40% (vs. 75–80% for pumped hydro (PHES)), and the cost is slightly higher than pumped hydro. Another study, referenced by a European Commission staff working paper, found that for large-scale storage the cheapest option is hydrogen at €140/MWh for 2,000 hours of storage using an electrolyser, salt cavern storage and a combined-cycle power plant. The European project Hyunder indicated in 2013 that for the storage of wind and solar energy an additional 85 caverns would be required, as the need cannot be covered by PHES and CAES systems.

A German case study on storage of hydrogen in salt caverns found that if the German power surplus (7% of total variable renewable generation by 2025 and 20% by 2050) were converted to hydrogen and stored underground, these quantities would require some 15 caverns of 500,000 cubic metres each by 2025 and some 60 caverns by 2050 – corresponding to approximately one third of the number of gas caverns currently operated in Germany. In the US, Sandia Labs are conducting research into the storage of hydrogen in depleted oil and gas fields, which could easily absorb large amounts of renewably produced hydrogen, as there are some 2.7 million depleted wells in existence.

Infrastructure

The hydrogen infrastructure would consist mainly of industrial hydrogen pipeline transport and hydrogen-equipped filling stations like those found on a hydrogen highway. Hydrogen stations not situated near a hydrogen pipeline would get supply via hydrogen tanks, compressed hydrogen tube trailers, liquid hydrogen trailers, liquid hydrogen tank trucks or dedicated onsite production. Over 700 miles of hydrogen pipeline currently exist in the United States. Although expensive, pipelines are the cheapest way to move hydrogen over long distances. Hydrogen gas piping is routine in large oil refineries, because hydrogen is used to hydrocrack fuels from crude oil. Hydrogen embrittlement is not a problem for hydrogen gas pipelines.
Hydrogen embrittlement only happens with 'diffusible' hydrogen, i.e. atoms or ions; hydrogen gas, however, is molecular (H2), and there is a very significant energy barrier to splitting it into atoms.

The IEA recommends that existing industrial ports be used for production and existing natural gas pipelines for transport, as well as international co-operation and shipping. South Korea and Japan, which as of 2019 lack international electrical interconnectors, are investing in the hydrogen economy. In March 2020, the Fukushima Hydrogen Energy Research Field was opened in Japan, claiming to be the world's largest hydrogen production facility. The site occupies 180,000 m2 (1,900,000 sq ft) of land, much of which is occupied by a solar array; power from the grid is also used for electrolysis of water to produce hydrogen fuel.

A key tradeoff: centralized vs. distributed production

In a future full hydrogen economy, primary energy sources and feedstock would be used to produce hydrogen gas as stored energy for use in various sectors of the economy. Producing hydrogen from primary energy sources other than coal and oil would result in lower production of the greenhouse gases characteristic of the combustion of those fossil energy resources. Non-polluting methane pyrolysis of natural gas is becoming recognized as a method for using current natural gas infrastructure investment to produce hydrogen without greenhouse gas emissions.

One key feature of a hydrogen economy would be that in mobile applications (primarily vehicular transport) energy generation and use could be decoupled. The primary energy source would no longer need to travel with the vehicle, as it currently does with hydrocarbon fuels. Instead of tailpipes creating dispersed emissions, the energy (and pollution) could be generated from point sources such as large-scale, centralized facilities with improved efficiency. This would allow the possibility of technologies such as carbon sequestration, which are otherwise impossible for mobile applications. Alternatively, distributed energy generation schemes (such as small-scale renewable energy sources) could be used, possibly associated with hydrogen stations.

Aside from the energy generation, hydrogen production could be centralized, distributed or a mixture of both. While generating hydrogen at centralized primary energy plants promises higher hydrogen production efficiency, difficulties in high-volume, long-range hydrogen transportation (due to factors such as hydrogen damage and the ease of hydrogen diffusion through solid materials) make electrical energy distribution attractive within a hydrogen economy. In such a scenario, small regional plants or even local filling stations could generate hydrogen using energy provided through the electrical distribution grid, or via methane pyrolysis of natural gas. While hydrogen generation efficiency is likely to be lower than for centralized hydrogen generation, losses in hydrogen transport could make such a scheme more efficient in terms of the primary energy used per kilogram of hydrogen delivered to the end user, as the illustrative comparison below suggests. The proper balance between hydrogen distribution, long-distance electrical distribution and destination-converted pyrolysis of natural gas is one of the primary questions about the hydrogen economy. The dilemmas of production sources and transportation of hydrogen could also be overcome using on-site generation of hydrogen (at homes, businesses, or fuel stations) from off-grid renewable sources.
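To make the centralized-versus-distributed tradeoff concrete, the sketch below compares primary electricity per kilogram of hydrogen delivered under two stylized supply chains. Every number in it is an invented placeholder chosen only to illustrate the shape of the comparison, not a figure from this article's sources.

```python
# Stylized comparison of two hydrogen supply chains. All efficiencies and
# kWh/kg figures below are hypothetical placeholders, for illustration only.

def delivered_energy(production_kwh_per_kg: float, step_efficiencies: list[float]) -> float:
    """Primary electricity (kWh) per kg of H2 delivered, after chain losses."""
    energy = production_kwh_per_kg
    for eff in step_efficiencies:
        energy /= eff  # each step in the chain loses a fraction of the energy
    return energy

# Centralized: efficient large-scale electrolysis (assume 50 kWh/kg),
# followed by compression and truck or pipeline delivery (assume 85% each).
centralized = delivered_energy(50.0, [0.85, 0.85])

# Distributed: smaller on-site electrolyser (assume 55 kWh/kg), fed through
# the grid (assume 94% transmission efficiency), with no hydrogen haulage.
distributed = delivered_energy(55.0, [0.94])

print(f"centralized: {centralized:.0f} kWh per kg delivered")  # ~69 kWh
print(f"distributed: {distributed:.0f} kWh per kg delivered")  # ~59 kWh
```

Under these particular assumptions the distributed route wins, but the ranking flips readily as the assumed transport and electrolyser efficiencies change, which is precisely the balance the paragraph above describes.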
Distributed electrolysis

Distributed electrolysis would bypass the problems of distributing hydrogen by distributing electricity instead. It would use existing electrical networks to transport electricity to small, on-site electrolysers located at filling stations. However, accounting for the energy used to produce the electricity and for transmission losses reduces the overall efficiency.

Safety

Hydrogen has one of the widest explosive/ignition mix ranges with air of all gases, with few exceptions such as acetylene, silane, and ethylene oxide, and in terms of minimum necessary ignition energy and mixture ratios it has extremely low requirements for an explosion to occur. This means that whatever the mix proportion between air and hydrogen, a hydrogen leak ignited in an enclosed space will most likely lead to an explosion, not a mere flame. Systems and procedures to avoid accidents involve considering:

Inerting hydrogen lines
Systems to quickly purge hydrogen or ventilate an area
Flaring off hydrogen
Ignition source management
Taking into account mechanical integrity issues
Identifying possible hydrogen gas as a byproduct in certain chemical reactions
Detecting hydrogen leaks or flames
Inventory management
Properly spacing hydrogen and other flammable materials
Specialized pressurized hydrogen containment tanks, in particular for cryogenic hydrogen
Other human factors

There are many codes and standards regarding hydrogen safety in storage, transport, and use, ranging from federal regulations to ANSI/AIAA, NFPA, and ISO standards. The Canadian Hydrogen Safety Program concluded that hydrogen fueling is as safe as, or safer than, compressed natural gas (CNG) fueling.

Costs

More widespread use of hydrogen entails investment and costs in its production, storage, distribution and use. Estimates of hydrogen's cost are therefore complex and need to make assumptions about the cost of energy inputs (typically gas and electricity), the production plant and method (e.g. green or blue hydrogen), the technologies used (e.g. alkaline or proton exchange membrane electrolysers), storage and distribution methods, and how different cost elements might change over time.: 49–65  These factors are incorporated into calculations of the levelized cost of hydrogen (LCOH), typically expressed in US$ per kg of H2, for which the range of estimates across commercially available production methods is broad.

As of 2022, gray hydrogen is cheapest to produce without a tax on its CO2 emissions, followed by blue and green hydrogen. Blue hydrogen production costs are not anticipated to fall substantially by 2050,: 28  can be expected to fluctuate with natural gas prices, and could face carbon taxes for uncaptured emissions.: 79  The cost of electrolysers fell by 60% from 2010 to 2022, before rising slightly due to an increasing cost of capital. Their cost is projected to fall significantly to 2030 and 2050,: 26  driving down the cost of green hydrogen alongside the falling cost of renewable power generation.: 28  It is cheapest to produce green hydrogen with surplus renewable power that would otherwise be curtailed, which favours electrolysers capable of responding to low and variable power levels.: 5

A 2022 Goldman Sachs analysis anticipates that green hydrogen will achieve cost parity with gray hydrogen globally by 2030, or earlier if a global carbon tax is placed on gray hydrogen.
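The structure of an LCOH calculation can be illustrated by spreading an electrolyser's annualized capital charge over its yearly hydrogen output and adding the electricity cost. The sketch below uses invented placeholder inputs (capex, capital charge, load hours, efficiency, power price) purely to show the mechanics, not to reproduce any published estimate.

```python
# A minimal levelized-cost-of-hydrogen (LCOH) sketch for an electrolyser.
# All input numbers are illustrative placeholders, not figures from the
# article or its sources.

def lcoh_per_kg(capex_usd_per_kw: float,
                annuity_factor: float,          # converts capex to an annual charge
                full_load_hours: float,         # operating hours per year
                efficiency_kwh_per_kg: float,   # electricity used per kg of H2
                electricity_usd_per_kwh: float) -> float:
    """Levelized cost of hydrogen in US$/kg for a simple electrolyser model."""
    kg_per_kw_year = full_load_hours / efficiency_kwh_per_kg
    capex_charge = capex_usd_per_kw * annuity_factor / kg_per_kw_year
    energy_cost = efficiency_kwh_per_kg * electricity_usd_per_kwh
    return capex_charge + energy_cost

# Example: $1,000/kW electrolyser, 10% annual capital charge, 4,000 h/yr,
# 52 kWh/kg, $0.03/kWh renewable power:
print(f"${lcoh_per_kg(1000, 0.10, 4000, 52, 0.03):.2f}/kg")  # ~ $2.86/kg
```

With these placeholders the result lands near $2.9/kg, and the split shows why electricity price and utilization dominate green hydrogen economics: the energy term alone contributes over half of the total.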
In terms of cost per unit of energy, blue and gray hydrogen will always cost more than the fossil fuels used in their production, while green hydrogen will always cost more than the renewable electricity used to make it. Subsidies for clean hydrogen production are much higher in the US and EU than in India.

Examples and pilot programs

The distribution of hydrogen for the purpose of transportation is being tested around the world, particularly in the US (California, Massachusetts), Canada, Japan, the EU (Portugal, Norway, Denmark, Germany), and Iceland. Several automakers, such as GM and Toyota, have developed hydrogen vehicles. However, as of February 2020, infrastructure for hydrogen was underdeveloped except in some parts of California. The United States has its own hydrogen policy. A joint venture between NREL and Xcel Energy is combining wind power and hydrogen in Colorado, and Newfoundland and Labrador Hydro is converting the current wind-diesel power system on the remote island of Ramea into a wind-hydrogen hybrid power system facility. A similar pilot project on Stuart Island uses solar power, instead of wind power, to generate electricity; when excess electricity is available after the batteries are fully charged, hydrogen is generated by electrolysis and stored for later production of electricity by fuel cell. The US also has a large natural gas pipeline system already in place.

Countries in the EU which have a relatively large natural gas pipeline system already in place include Belgium, Germany, France, and the Netherlands. In 2020, the EU launched its European Clean Hydrogen Alliance (ECHA).

The UK started a fuel cell pilot program in January 2004; the program ran two fuel cell buses on route 25 in London until December 2005, then switched to route RV1 until January 2007. The Hydrogen Expedition is working to create a hydrogen fuel cell-powered ship and to use it to circumnavigate the globe, as a way to demonstrate the capability of hydrogen fuel cells. In August 2021, the UK government claimed it was the first to have a hydrogen strategy, and produced a policy document.

Western Australia's Department of Planning and Infrastructure operated three DaimlerChrysler Citaro fuel cell buses as part of its Sustainable Transport Energy for Perth fuel cell bus trial in Perth. The buses were operated by Path Transit on regular Transperth public bus routes. The trial began in September 2004 and concluded in September 2007. The buses' fuel cells used a proton exchange membrane system and were supplied with raw hydrogen from a BP refinery in Kwinana, south of Perth. The hydrogen was a byproduct of the refinery's industrial process, and the buses were refueled at a station in the northern Perth suburb of Malaga.

Iceland has committed to becoming the world's first hydrogen economy by the year 2050. Iceland is in a unique position: presently it imports all the petroleum products necessary to power its automobiles and fishing fleet, yet it has large geothermal resources, so much so that the local price of electricity is actually lower than the price of the hydrocarbons that could be used to produce that electricity. Iceland already converts its surplus electricity into exportable goods and hydrocarbon replacements. In 2002, it produced 2,000 tons of hydrogen gas by electrolysis, primarily for the production of ammonia (NH3) for fertilizer.
Ammonia is produced, transported, and used throughout the world, and 90% of the cost of ammonia is the cost of the energy to produce it. Neither industry directly replaces hydrocarbons. Reykjavík, Iceland, had a small pilot fleet of city buses running on compressed hydrogen, and research on powering the nation's fishing fleet with hydrogen is under way (for example by companies such as Icelandic New Energy). For more practical purposes, Iceland might process imported oil with hydrogen to extend it, rather than replace it altogether. The Reykjavík buses were part of a larger program, HyFLEET:CUTE, operating hydrogen-fueled buses in eight European cities. HyFLEET:CUTE buses were also operated in Beijing, China and Perth, Australia (see above).

A pilot project demonstrating a hydrogen economy is operational on the Norwegian island of Utsira. The installation combines wind power and hydrogen power. In periods when there is surplus wind energy, the excess power is used for generating hydrogen by electrolysis. The hydrogen is stored and is available for power generation in periods when there is little wind.

India is expected to adopt hydrogen and H-CNG for several reasons, among them the fact that a national rollout of natural gas networks is already taking place and natural gas is already a major vehicle fuel. In addition, India suffers from extreme air pollution in urban areas. According to some estimates, nearly 80% of India's hydrogen is projected to be green, driven by cost declines and new production technologies. Currently, however, hydrogen energy is just at the research, development and demonstration (RD&D) stage. As a result, the number of hydrogen stations may still be low, although many more are expected to be introduced soon.

The Turkish Ministry of Energy and Natural Resources and the United Nations Industrial Development Organization signed a $40 million trust fund agreement in 2003 for the creation of the International Centre for Hydrogen Energy Technologies (UNIDO-ICHET) in Istanbul, which started operation in 2004. A hydrogen forklift, a hydrogen cart and a mobile house powered by renewable energies are being demonstrated on UNIDO-ICHET's premises. An uninterruptible power supply system has been working since April 2009 in the headquarters of the Istanbul Sea Buses company.

Another indicator of the presence of large natural gas infrastructures already in place and in use by citizens is the number of natural gas vehicles present in a country. The countries with the largest number of natural gas vehicles are (in order of magnitude): Iran, China, Pakistan, Argentina, India, Brazil, Italy, Colombia, Thailand, Uzbekistan, Bolivia, Armenia, Bangladesh, Egypt, Peru, Ukraine, and the United States. Natural gas vehicles can also be converted to run on hydrogen.

Some hospitals have installed combined electrolyser-storage-fuel cell units for local emergency power. These are advantageous for emergency use because of their low maintenance requirement and ease of location compared to internal combustion driven generators.

Also, in some private homes, fuel cell micro-CHP plants can be found, which can operate on hydrogen or other fuels such as natural gas or LPG. When running on natural gas, such a plant relies on steam reforming of natural gas to convert the natural gas to hydrogen prior to use in the fuel cell.
This still emits CO2, but running temporarily on natural gas in this way can be a good interim solution until hydrogen starts to be distributed through the (natural gas) piping system.

In October 2021, Queensland Premier Annastacia Palaszczuk and Andrew Forrest announced that Queensland will be home to the world's largest hydrogen plant. German car manufacturer BMW has also been working with hydrogen for years. Saudi Arabia, as part of the NEOM project, is looking to produce roughly 1.2 million tonnes of green ammonia a year, beginning production in 2025.

In Australia, the Australian Renewable Energy Agency (ARENA) has invested $55 million in 28 hydrogen projects, from early-stage research and development to early-stage trials and deployments. The agency's stated goal is to produce hydrogen by electrolysis for $2 per kilogram, announced by Minister for Energy and Emissions Angus Taylor in a 2021 Low Emissions Technology Statement.

In August 2021, Chris Jackson quit as chair of the UK Hydrogen and Fuel Cell Association, a leading hydrogen industry association, claiming that UK and Norwegian oil companies had intentionally inflated their cost projections for blue hydrogen in order to maximize future technology support payments from the UK government.

Green hydrogen has become more common in France. A €150 million Green Hydrogen Plan was established in 2019; it calls for building the infrastructure necessary to create, store, and distribute hydrogen, as well as using the fuel to power local transportation systems such as buses and trains. Corridor H2, a similar initiative, will create a network of hydrogen distribution facilities in Occitania along the route between the Mediterranean and the North Sea. The Corridor H2 project will receive a €40 million loan from the EIB.

Research and development

Experimental production methods

Methane pyrolysis – turquoise
Pyrolysis of methane (natural gas), a one-step process that bubbles methane through a molten metal catalyst, is a "no greenhouse gas" approach to producing hydrogen that was demonstrated in laboratory conditions in 2017 and is now being tested at larger scales. The process is conducted at high temperatures (1065 °C). Producing 1 kg of hydrogen requires about 18 kWh of electricity for process heat. The pyrolysis of methane can be expressed by the following reaction equation:

CH4(g) → C(s) + 2 H2(g)    ΔH° = 74.8 kJ/mol

The industrial-quality solid carbon may be sold as manufacturing feedstock or landfilled (no pollution).

Biological production
Fermentative hydrogen production is the fermentative conversion of organic substrate to biohydrogen by a diverse group of bacteria using multi-enzyme systems involving three steps similar to anaerobic conversion. Dark fermentation reactions do not require light energy, so they are capable of constantly producing hydrogen from organic compounds throughout the day and night. Photofermentation differs from dark fermentation because it only proceeds in the presence of light. For example, photo-fermentation with Rhodobacter sphaeroides SH2C can be employed to convert small-molecule fatty acids into hydrogen. Electrohydrogenesis is used in microbial fuel cells where hydrogen is produced from organic matter (e.g. from sewage, or solid matter) while 0.2–0.8 V is applied. Biological hydrogen can be produced in an algae bioreactor. In the late 1990s it was discovered that if the algae is deprived of sulfur it will switch from the production of oxygen, i.e.
normal photosynthesis, to the production of hydrogen.

Biological hydrogen can be produced in bioreactors that use feedstocks other than algae, the most common feedstock being waste streams. The process involves bacteria feeding on hydrocarbons and excreting hydrogen and CO2. The CO2 can be sequestered successfully by several methods, leaving hydrogen gas. In 2006–2007, NanoLogix first demonstrated a prototype hydrogen bioreactor using waste as a feedstock at Welch's grape juice factory in North East, Pennsylvania (U.S.).

Biocatalysed electrolysis
Besides regular electrolysis, electrolysis using microbes is another possibility. With biocatalysed electrolysis, hydrogen is generated after running through a microbial fuel cell, and a variety of aquatic plants can be used. These include reed sweetgrass, cordgrass, rice, tomatoes, lupines, and algae.

High-pressure electrolysis
High-pressure electrolysis is the decomposition of water (H2O) into oxygen (O2) and hydrogen gas (H2) by means of an electric current being passed through the water. The difference from a standard electrolyser is the compressed hydrogen output, around 120–200 bar (1,740–2,900 psi; 12–20 MPa). By pressurising the hydrogen in the electrolyser, through a process known as chemical compression, the need for an external hydrogen compressor is eliminated; the average energy consumption for internal compression is around 3%. Europe's largest hydrogen production plant using high-pressure alkaline water electrolysis (1,400,000 kg/a) operates at Kokkola, Finland.

High-temperature electrolysis
Hydrogen can be generated from energy supplied in the form of heat and electricity through high-temperature electrolysis (HTE). Because some of the energy in HTE is supplied in the form of heat, less of the energy must be converted twice (from heat to electricity, and then to chemical form), and so potentially far less energy is required per kilogram of hydrogen produced. While nuclear-generated electricity could be used for electrolysis, nuclear heat can be directly applied to split hydrogen from water. High-temperature (950–1000 °C) gas-cooled nuclear reactors have the potential to split hydrogen from water by thermochemical means using nuclear heat. Research into high-temperature nuclear reactors may eventually lead to a hydrogen supply that is cost-competitive with natural gas steam reforming. General Atomics predicts that hydrogen produced in a High Temperature Gas Cooled Reactor (HTGR) would cost $1.53/kg. In 2003, steam reforming of natural gas yielded hydrogen at $1.40/kg. At 2005 natural gas prices, hydrogen cost $2.70/kg. High-temperature electrolysis has been demonstrated in a laboratory, at 108 MJ (thermal) per kilogram of hydrogen produced, but not at a commercial scale. In addition, this is lower-quality "commercial" grade hydrogen, unsuitable for use in fuel cells.

Photoelectrochemical water splitting
Using electricity produced by photovoltaic systems offers the cleanest way to produce hydrogen. Water is broken into hydrogen and oxygen by electrolysis in a photoelectrochemical cell (PEC) process, also named artificial photosynthesis. William Ayers at Energy Conversion Devices demonstrated and patented the first multijunction high-efficiency photoelectrochemical system for direct splitting of water in 1983.
This group demonstrated direct water splitting, now referred to as an "artificial leaf" or "wireless solar water splitting", with a low-cost thin-film amorphous silicon multijunction sheet immersed directly in water. Hydrogen evolved on the front amorphous silicon surface, decorated with various catalysts, while oxygen evolved off the back metal substrate. A Nafion membrane above the multijunction cell provided a path for ion transport. Their patent also lists a variety of other semiconductor multijunction materials for direct water splitting in addition to amorphous silicon and silicon-germanium alloys. Research continues towards developing high-efficiency multi-junction cell technology at universities and in the photovoltaic industry. If this process is assisted by photocatalysts suspended directly in water instead of a photovoltaic system coupled to an electrolytic system, the reaction takes place in just one step, which can improve efficiency.

Photoelectrocatalytic production
A method studied by Thomas Nann and his team at the University of East Anglia consists of a gold electrode covered in layers of indium phosphide (InP) nanoparticles. They introduced an iron-sulfur complex into the layered arrangement, which, when submerged in water and irradiated with light under a small electric current, produced hydrogen with an efficiency of 60%.

In 2015, it was reported that Panasonic Corp. had developed a photocatalyst based on niobium nitride that can absorb 57% of sunlight to support the decomposition of water to produce hydrogen gas. The company planned to achieve commercial application "as early as possible", not before 2020.

Concentrating solar thermal
Very high temperatures are required to dissociate water into hydrogen and oxygen. A catalyst is required to make the process operate at feasible temperatures. Heating the water can be achieved through the use of concentrating solar power. Hydrosol-2 is a 100-kilowatt pilot plant at the Plataforma Solar de Almería in Spain which uses sunlight to obtain the required 800 to 1,200 °C to heat water. Hydrosol-2 has been in operation since 2008. The design of this 100-kilowatt pilot plant is based on a modular concept. As a result, it may be possible that this technology could be readily scaled up to the megawatt range by multiplying the available reactor units and by connecting the plant to heliostat fields (fields of sun-tracking mirrors) of a suitable size.

Thermochemical production
There are more than 352 thermochemical cycles which can be used for water splitting. Around a dozen of these cycles, such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur–iodine cycle, copper–chlorine cycle, hybrid sulfur cycle, and aluminium–aluminium oxide cycle, are under research and in the testing phase to produce hydrogen and oxygen from water and heat without using electricity. These processes can be more efficient than high-temperature electrolysis, typically in the range of 35–49% LHV efficiency. Thermochemical production of hydrogen using chemical energy from coal or natural gas is generally not considered, because the direct chemical path is more efficient. None of the thermochemical hydrogen production processes have been demonstrated at production levels, although several have been demonstrated in laboratories.

Microwaving plastics
A 97% recovery of hydrogen has been achieved by microwaving ground plastics mixed with iron oxide and aluminium oxide for a few seconds.
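The per-kilogram energy figures quoted in this section can be put side by side with hydrogen's lower heating value in a short sketch. The figure for conventional electrolysis is an assumed typical value added for context, not taken from this text, and the pyrolysis figure covers electricity for process heat only, since part of the product's energy comes from the methane feedstock itself.

```python
# Back-of-the-envelope comparison of energy inputs per kg of hydrogen.
# The "conventional electrolysis" figure is an assumed typical value;
# the other two are quoted in the text above. Methane pyrolysis also
# draws chemical energy from the methane feedstock, so its electricity
# figure alone understates its total energy input.

MJ_PER_KWH = 3.6
LHV_H2_KWH_PER_KG = 33.3  # lower heating value of hydrogen

inputs_kwh_per_kg = {
    "methane pyrolysis (electric process heat)": 18.0,
    "high-temperature electrolysis (108 MJ thermal)": 108.0 / MJ_PER_KWH,
    "conventional electrolysis (assumed ~52 kWh/kg)": 52.0,
}

for name, e_in in inputs_kwh_per_kg.items():
    ratio = LHV_H2_KWH_PER_KG / e_in
    print(f"{name}: {e_in:.0f} kWh/kg in, {ratio:.2f} LHV out per unit in")
```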
Kværner process
The Kværner process or Kvaerner carbon black and hydrogen process (CB&H) is a method, developed in the 1980s by a Norwegian company of the same name, for the production of hydrogen from hydrocarbons (CnHm) such as methane, natural gas and biogas. Of the available energy of the feed, approximately 48% is contained in the hydrogen, 40% in activated carbon and 10% in superheated steam.

Extraction of naturally-occurring hydrogen
As of 2019, hydrogen is mainly used as an industrial feedstock, primarily for the production of ammonia and methanol, and in petroleum refining. Although hydrogen gas was initially thought not to occur naturally in convenient reservoirs, it has now been demonstrated that this is not the case; a hydrogen system is currently being exploited near Bourakebougou, Koulikoro Region in Mali, producing electricity for the surrounding villages. More discoveries of naturally occurring hydrogen in continental, on-shore geological environments have been made in recent years and open the way to the novel field of natural or native hydrogen, supporting energy transition efforts.
natural gas
Natural gas (also called fossil gas, methane gas or simply gas) is a naturally occurring mixture of gaseous hydrocarbons consisting primarily of methane in addition to various smaller amounts of other higher alkanes. Low levels of trace gases like carbon dioxide, nitrogen, hydrogen sulfide, and helium are also usually present. Methane is colorless and odorless, and the second largest greenhouse gas contributor to global climate change after carbon dioxide. Because natural gas is odorless, odorizers such as mercaptan (which smells like sulfur or rotten eggs) are commonly added to it for safety so that leaks can be readily detected.

Natural gas is a fossil fuel and non-renewable resource that is formed when layers of organic matter (primarily marine microorganisms) decompose under anaerobic conditions and are subjected to intense heat and pressure underground over millions of years. The energy that the decayed organisms originally obtained from the sun via photosynthesis is stored as chemical energy within the molecules of methane and other hydrocarbons.

Natural gas can be burned for heating, cooking, and electricity generation. It is also used as a chemical feedstock in the manufacture of plastics and other commercially important organic chemicals and less commonly used as a fuel for vehicles. The extraction and consumption of natural gas is a major and growing contributor to climate change. Both the gas itself (specifically methane) and carbon dioxide, which is released when natural gas is burned, are greenhouse gases. When burned for heat or electricity, natural gas emits fewer toxic air pollutants, less carbon dioxide, and almost no particulate matter compared to other fossil and biomass fuels. However, gas venting and unintended fugitive emissions throughout the supply chain can result in natural gas having a similar carbon footprint to other fossil fuels overall.

Natural gas can be found in underground geological formations, often alongside other fossil fuels like coal and oil (petroleum). Most natural gas has been created through either biogenic or thermogenic processes. Biogenic gas is formed when methanogenic organisms in marshes, bogs, landfills, and shallow sediments anaerobically decompose but are not subjected to high temperatures and pressures. Thermogenic gas takes a much longer period of time to form and is created when organic matter is heated and compressed deep underground. During petroleum production, natural gas is sometimes flared rather than being collected and used. Before natural gas can be burned as a fuel or used in manufacturing processes, it almost always has to be processed to remove impurities such as water. The byproducts of this processing include ethane, propane, butanes, pentanes, and higher molecular weight hydrocarbons. Hydrogen sulfide (which may be converted into pure sulfur), carbon dioxide, water vapor, and sometimes helium and nitrogen must also be removed.

Natural gas is sometimes informally referred to simply as "gas", especially when it is being compared to other energy sources, such as oil, coal or renewables. However, it is not to be confused with gasoline, which is often shortened in colloquial usage to "gas", especially in North America.

Natural gas is measured in standard cubic meters or standard cubic feet. The density compared to air ranges from 0.58 (16.8 g/mole, 0.71 kg per standard cubic meter) to as high as 0.79 (22.9 g/mole, 0.97 kg per scm), but generally less than 0.64 (18.5 g/mole, 0.78 kg per scm).
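These relative densities follow directly from molar mass via the ideal gas law; the short check below (which assumes a mean molar mass of about 28.97 g/mol for air, a standard textbook value rather than a figure from this article) reproduces them, including the pure-methane comparison given next.

```python
# Ideal-gas check on the density figures quoted above.
# Air's mean molar mass (~28.97 g/mol) is an assumed standard value.

R = 8.314        # gas constant, J/(mol*K)
T = 288.15       # 15 degC, the standard temperature used for scm
P = 101_325      # standard pressure, Pa

M_AIR = 28.97        # g/mol, assumption
M_METHANE = 16.0425  # g/mol, from the text

def density_kg_per_scm(molar_mass_g_per_mol: float) -> float:
    """Ideal-gas density at standard conditions, kg per standard cubic meter."""
    return P * (molar_mass_g_per_mol / 1000.0) / (R * T)

print(f"methane relative to air: {M_METHANE / M_AIR:.4f}")      # ~0.5538
print(f"methane: {density_kg_per_scm(M_METHANE):.3f} kg/scm")   # ~0.678
print(f"16.8 g/mol gas: {density_kg_per_scm(16.8):.2f} kg/scm") # ~0.71
```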
For comparison, pure methane (16.0425 g/mole) has a density 0.5539 times that of air (0.678 kg per standard cubic meter).

Name
In the early 1800s, natural gas became known as "natural" to distinguish it from the dominant gas fuel at the time, coal gas. Unlike coal gas, which is manufactured by heating coal, natural gas can be extracted from the ground in its native gaseous form. When the use of natural gas overtook the use of coal gas in English-speaking countries in the 20th century, it was increasingly referred to as simply "gas." In order to highlight its role in exacerbating the climate crisis, however, many organizations have criticized the continued use of the word "natural" in referring to the gas. These advocates prefer the terms "fossil gas" or "methane gas" as better conveying its climate threat to the public. A 2020 study of Americans' perceptions of the fuel found that, across political identifications, the term "methane gas" led to better estimates of its harms and risks.

History
Natural gas can come out of the ground and cause a long-burning fire. In ancient Greece, the gas flames at Mount Chimaera contributed to the legend of the fire-breathing creature Chimera. In ancient China, gas resulting from the drilling for brines was first used by about 400 BC. The Chinese transported gas seeping from the ground in crude pipelines of bamboo to where it was used to boil salt water to extract the salt in the Ziliujing District of Sichuan.

Natural gas was not widely used before the development of long-distance pipelines in the early twentieth century. Before that, most use was near the source of the well, and the predominant gas for fuel and lighting during the Industrial Revolution was manufactured coal gas.

The history of natural gas in the United States begins with localized use. In the seventeenth century, French missionaries witnessed the American Indians setting fire to natural gas seeps around Lake Erie, and scattered observations of these seeps were made by European-descended settlers throughout the eastern seaboard through the 1700s. In 1821, William Hart dug the first commercial natural gas well in the United States at Fredonia, New York, which led in 1858 to the formation of the Fredonia Gas Light Company. Further such ventures followed near wells in other states, until technological innovations allowed the growth of major long-distance pipelines from the 1920s onward.

By 2009, 66,000 km3 (16,000 cu mi) (or 8%) had been used out of the total 850,000 km3 (200,000 cu mi) of estimated remaining recoverable reserves of natural gas.

Sources

Natural gas
In the 19th century, natural gas was primarily obtained as a by-product of producing oil. The small, light gas carbon chains came out of solution as the extracted fluids underwent pressure reduction from the reservoir to the surface, similar to uncapping a soft drink bottle where the carbon dioxide effervesces. The gas was often viewed as a by-product, a hazard, and a disposal problem in active oil fields. The large volumes produced could not be used until relatively expensive pipeline and storage facilities were constructed to deliver the gas to consumer markets. Until the early part of the 20th century, most natural gas associated with oil was either simply released or burned off at oil fields. Gas venting and production flaring are still practised in modern times, but efforts are ongoing around the world to retire them, and to replace them with other commercially viable and useful alternatives.
Unwanted gas (or stranded gas without a market) is often returned to the reservoir with 'injection' wells while awaiting a possible future market or to re-pressurize the formation, which can enhance oil extraction rates from other wells. In regions with a high natural gas demand (such as the US), pipelines are constructed when it is economically feasible to transport gas from a wellsite to an end consumer.

In addition to transporting gas via pipelines for use in power generation, other end uses for natural gas include export as liquefied natural gas (LNG) or conversion of natural gas into other liquid products via gas-to-liquids (GTL) technologies. GTL technologies can convert natural gas into liquid products such as gasoline, diesel or jet fuel. A variety of GTL technologies have been developed, including Fischer–Tropsch (F–T), methanol to gasoline (MTG) and syngas to gasoline plus (STG+). F–T produces a synthetic crude that can be further refined into finished products, while MTG can produce synthetic gasoline from natural gas. STG+ can produce drop-in gasoline, diesel, jet fuel and aromatic chemicals directly from natural gas via a single-loop process. In 2011, Royal Dutch Shell's 140,000 barrels (22,000 m3) per day F–T plant went into operation in Qatar.

Natural gas can be "associated" (found in oil fields) or "non-associated" (isolated in natural gas fields), and is also found in coal beds (as coalbed methane). It sometimes contains a significant amount of ethane, propane, butane, and pentane—heavier hydrocarbons removed for commercial use prior to the methane being sold as a consumer fuel or chemical plant feedstock. Non-hydrocarbons such as carbon dioxide, nitrogen, helium (rarely), and hydrogen sulfide must also be removed before the natural gas can be transported.

Natural gas extracted from oil wells is called casinghead gas (whether or not truly produced up the annulus and through a casinghead outlet) or associated gas. The natural gas industry is extracting an increasing quantity of gas from challenging, unconventional resource types: sour gas, tight gas, shale gas, and coalbed methane.

There is some disagreement on which country has the largest proven gas reserves. Sources that consider that Russia has by far the largest proven reserves include the US Central Intelligence Agency (47,600 km3) and the Energy Information Administration (47,800 km3), as well as the Organization of Petroleum Exporting Countries (48,700 km3). Contrarily, BP credits Russia with only 32,900 km3, which would place it second, slightly behind Iran (33,100 to 33,800 km3, depending on the source). It is estimated that there are about 900,000 km3 of "unconventional" gas such as shale gas, of which 180,000 km3 may be recoverable. In turn, many studies from MIT, Black & Veatch and the US Department of Energy predict that natural gas will account for a larger portion of electricity generation and heat in the future.

The world's largest gas field is the offshore South Pars / North Dome Gas-Condensate field, shared between Iran and Qatar. It is estimated to have 51,000 cubic kilometers (12,000 cu mi) of natural gas and 50 billion barrels (7.9 billion cubic meters) of natural gas condensates.

Because natural gas is not a pure product, when non-associated gas is extracted from a field under supercritical (pressure/temperature) conditions and the reservoir pressure drops, the higher-molecular-weight components may partially condense upon isothermal depressurizing—an effect called retrograde condensation.
The liquid thus formed may get trapped as the pores of the gas reservoir get depleted. One method to deal with this problem is to re-inject dried gas free of condensate to maintain the underground pressure and to allow re-evaporation and extraction of condensates. More frequently, the liquid condenses at the surface, and one of the tasks of the gas plant is to collect this condensate. The resulting liquid is called natural gas liquid (NGL) and has commercial value.

Shale gas
Shale gas is natural gas produced from shale. Because shale has matrix permeability too low to allow gas to flow in economical quantities, shale gas wells depend on fractures to allow the gas to flow. Early shale gas wells depended on natural fractures through which gas flowed; almost all shale gas wells today require fractures artificially created by hydraulic fracturing. Since 2000, shale gas has become a major source of natural gas in the United States and Canada. Because of increased shale gas production, the United States was the world's number one natural gas producer in 2014. The production of shale gas in the United States has been described as a "shale gas revolution" and as "one of the landmark events in the 21st century."

Following the increased production in the United States, shale gas exploration is beginning in countries such as Poland, China, and South Africa. Chinese geologists have identified the Sichuan Basin as a promising target for shale gas drilling, because of the similarity of its shales to those that have proven productive in the United States. Production from the Wei-201 well is between 10,000 and 20,000 m3 per day. In late 2020, China National Petroleum Corporation claimed daily production of 20 million cubic meters of gas from its Changning-Weiyuan demonstration zone.

Town gas
Town gas is a flammable gaseous fuel made by the destructive distillation of coal. It contains a variety of calorific gases including hydrogen, carbon monoxide, methane, and other volatile hydrocarbons, together with small quantities of non-calorific gases such as carbon dioxide and nitrogen, and was used in a similar way to natural gas. This is a historical technology and is not usually economically competitive with other sources of fuel gas today.

Most town "gashouses" located in the eastern US in the late 19th and early 20th centuries were simple by-product coke ovens that heated bituminous coal in air-tight chambers. The gas driven off from the coal was collected and distributed through networks of pipes to residences and other buildings where it was used for cooking and lighting. (Gas heating did not come into widespread use until the last half of the 20th century.) The coal tar (or asphalt) that collected in the bottoms of the gashouse ovens was often used for roofing and other waterproofing purposes, and when mixed with sand and gravel was used for paving streets.

Crystallized natural gas – clathrates
Huge quantities of natural gas (primarily methane) exist in the form of clathrates under sediment on offshore continental shelves and on land in arctic regions that experience permafrost, such as those in Siberia. Hydrates require a combination of high pressure and low temperature to form. In 2013, Japan Oil, Gas and Metals National Corporation (JOGMEC) announced that it had recovered commercially relevant quantities of natural gas from methane hydrate.

Processing
A schematic block flow diagram of a typical natural gas processing plant illustrates how raw gas is treated.
It shows the various unit processes used to convert raw natural gas into sales gas pipelined to the end-user markets. The block flow diagram also shows how processing of the raw natural gas yields byproduct sulfur, byproduct ethane, and natural gas liquids (NGL): propane, butanes and natural gasoline (denoted as pentanes +).

Demand
As of mid-2020, natural gas production in the US had peaked three times, with current levels exceeding both previous peaks. It reached 24.1 trillion cubic feet per year in 1973, followed by a decline, and reached 24.5 trillion cubic feet in 2001. After a brief drop, withdrawals increased nearly every year since 2006 (owing to the shale gas boom), with 2017 production at 33.4 trillion cubic feet and 2019 production at 40.7 trillion cubic feet. After the third peak in December 2019, extraction continued to fall from March onward due to decreased demand caused by the COVID-19 pandemic in the US.

The 2021 global energy crisis was driven by a global surge in demand as the world emerged from the economic recession caused by COVID-19, particularly due to strong energy demand in Asia.

Storage and transport
Because of its low density, it is not easy to store natural gas or to transport it by vehicle. Natural gas pipelines are impractical across oceans, since the gas needs to be cooled down and compressed, as the friction in the pipeline causes the gas to heat up. Many existing pipelines in the US are close to reaching their capacity, prompting some politicians representing northern states to speak of potential shortages. The large trade cost implies that natural gas markets are globally much less integrated, causing significant price differences across countries. In Western Europe, the gas pipeline network is already dense. New pipelines are planned or under construction between Western Europe and the Near East or Northern Africa.

Whenever gas is bought or sold at custody transfer points, rules and agreements are made regarding the gas quality. These may include the maximum allowable concentration of CO2, H2S and H2O. Usually sales-quality gas that has been treated to remove contamination is traded on a "dry gas" basis and is required to be commercially free from objectionable odours, materials, and dust or other solid or liquid matter, waxes, gums and gum-forming constituents, which might damage or adversely affect operation of equipment downstream of the custody transfer point.

LNG carrier ships transport liquefied natural gas (LNG) across oceans, while tank trucks can carry LNG or compressed natural gas (CNG) over shorter distances. Sea transport using CNG carrier ships that are now under development may be competitive with LNG transport in specific conditions.

Gas is turned into liquid at a liquefaction plant, and is returned to gas form at a regasification plant at the terminal. Shipborne regasification equipment is also used. LNG is the preferred form for long-distance, high-volume transportation of natural gas, whereas pipeline is preferred for transport for distances up to 4,000 km (2,500 mi) over land and approximately half that distance offshore. CNG is transported at high pressure, typically above 200 bar (20,000 kPa; 2,900 psi). Compressors and decompression equipment are less capital-intensive and may be economical in smaller unit sizes than liquefaction/regasification plants. Natural gas trucks and carriers may transport natural gas directly to end-users, or to distribution points such as pipelines.
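The breakeven distances quoted above lend themselves to a simple rule-of-thumb selector. The sketch below hard-codes only those two thresholds and is a deliberate simplification: real routing decisions also weigh volume, terrain, geopolitics and capital cost.

```python
# Rule-of-thumb transport-mode choice using the breakeven distances
# quoted above (~4,000 km over land, roughly half that offshore).
# A simplification: real decisions also depend on volume and cost.

def preferred_transport(distance_km: float, offshore: bool) -> str:
    pipeline_limit_km = 2_000 if offshore else 4_000
    return "pipeline" if distance_km <= pipeline_limit_km else "LNG shipping"

print(preferred_transport(1_500, offshore=False))  # pipeline
print(preferred_transport(3_000, offshore=True))   # LNG shipping
```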
In the past, the natural gas which was recovered in the course of recovering petroleum could not be profitably sold, and was simply burned at the oil field in a process known as flaring. Flaring is now illegal in many countries. Additionally, higher demand in the last 20–30 years has made production of gas associated with oil economically viable. As a further option, the gas is now sometimes re-injected into the formation for enhanced oil recovery by pressure maintenance as well as miscible or immiscible flooding. Conservation, re-injection, or flaring of natural gas associated with oil is primarily dependent on proximity to markets (pipelines) and regulatory restrictions.

Natural gas can be exported indirectly, through its absorption in other physical output. A recent study suggests that the expansion of shale gas production in the US has caused prices to drop relative to other countries. This has caused a boom in energy-intensive manufacturing sector exports, whereby the average dollar unit of US manufacturing exports almost tripled its energy content between 1996 and 2012.

A "master gas system" was invented in Saudi Arabia in the late 1970s, ending any necessity for flaring. Satellite and nearby infra-red camera observations, however, show that flaring and venting are still happening in some countries. Natural gas is used to generate electricity and heat for desalination. Similarly, some landfills that also discharge methane gases have been set up to capture the methane and generate electricity.

Natural gas is often stored underground inside depleted gas reservoirs from previous gas wells, salt domes, or in tanks as liquefied natural gas. The gas is injected in a time of low demand and extracted when demand picks up. Storage near end users helps to meet volatile demands, but such storage may not always be practicable.

With 15 countries accounting for 84% of the worldwide extraction, access to natural gas has become an important issue in international politics, and countries vie for control of pipelines. In the first decade of the 21st century, Gazprom, the state-owned energy company in Russia, engaged in disputes with Ukraine and Belarus over the price of natural gas, which created concerns that gas deliveries to parts of Europe could be cut off for political reasons. The United States is preparing to export natural gas.

Floating liquefied natural gas
Floating liquefied natural gas (FLNG) is an innovative technology designed to enable the development of offshore gas resources that would otherwise remain untapped due to environmental or economic factors which currently make them impractical to develop via a land-based LNG operation. FLNG technology also provides a number of environmental and economic advantages:

Environmental – Because all processing is done at the gas field, there is no requirement for long pipelines to shore, compression units to pump the gas to shore, dredging and jetty construction, and onshore construction of an LNG processing plant, which significantly reduces the environmental footprint. Avoiding construction also helps preserve marine and coastal environments. In addition, environmental disturbance will be minimised during decommissioning because the facility can easily be disconnected and removed before being refurbished and re-deployed elsewhere.
Economic – Where pumping gas to shore can be prohibitively expensive, FLNG makes development economically viable.
As a result, it will open up new business opportunities for countries to develop offshore gas fields that would otherwise remain stranded, such as those offshore East Africa.

Many gas and oil companies are considering the economic and environmental benefits of floating liquefied natural gas (FLNG). There are currently projects underway to construct five FLNG facilities. Petronas is close to completion on its FLNG-1 at Daewoo Shipbuilding and Marine Engineering and is underway on its FLNG-2 project at Samsung Heavy Industries. Shell Prelude is due to start production in 2017. The Browse LNG project will commence FEED in 2019.

Uses
Natural gas is primarily used in the northern hemisphere. North America and Europe are major consumers. Often wellhead gases require removal of various hydrocarbon molecules contained within the gas. Some of these gases include heptane, pentane, propane and other hydrocarbons with molecular weights above methane (CH4). The natural gas transmission lines extend to the natural gas processing plant or unit which removes the higher-molecular-weight hydrocarbons to produce natural gas with an energy content between 35–39 megajoules per cubic metre (950–1,050 British thermal units per cubic foot). The processed natural gas may then be used for residential, commercial and industrial uses.

Mid-stream natural gas
Natural gas flowing in the distribution lines is called mid-stream natural gas and is often used to power engines which rotate compressors. These compressors are required in the transmission line to pressurize and repressurize the mid-stream natural gas as the gas travels. Typically, natural gas-powered engines require 35–39 MJ/m3 (950–1,050 BTU/cu ft) natural gas to operate at the rotational nameplate specifications. Several methods are used to remove these higher-molecular-weight gases for use by the natural gas engine. A few technologies are as follows:

Joule–Thomson skid
Cryogenic or chiller system
Chemical enzymology system

Power generation

Domestic use
In the US, over one-third of households (>40 million homes) cook with gas. Natural gas dispensed in a residential setting can generate temperatures in excess of 1,100 °C (2,000 °F), making it a powerful domestic cooking and heating fuel. Stanford scientists estimated that gas stoves emit 0.8–1.3% of the gas they use as unburned methane and that total U.S. stove emissions are 28.1 gigagrams of methane. In much of the developed world it is supplied through pipes to homes, where it is used for many purposes including ranges and ovens, heating/cooling, outdoor and portable grills, and central heating. Heaters in homes and other buildings may include boilers, furnaces, and water heaters. Both North America and Europe are major consumers of natural gas.

Domestic appliances, furnaces, and boilers use low pressure, usually with a standard pressure of around 1.7 kilopascals (0.25 psi) over atmospheric pressure. The pressures in the supply lines vary, either the standard utilization pressure (UP) mentioned above or elevated pressure (EP), which may be anywhere from 7 to 800 kilopascals (1 to 120 psi) over atmospheric pressure. Systems using EP have a regulator at the service entrance to step down to UP. Natural gas piping systems inside buildings are often designed with pressures of 14 to 34 kilopascals (2 to 5 psi), and have downstream pressure regulators to reduce pressure as needed.
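The gauge-pressure figures quoted in this subsection convert between psi and kilopascals with a single factor (1 psi ≈ 6.895 kPa); the quick check below reproduces them, with small differences due to rounding in the text.

```python
# Check of the gauge-pressure conversions quoted above (1 psi = 6.894757 kPa).
PSI_TO_KPA = 6.894757

for psi in (0.25, 1, 2, 5, 120):
    print(f"{psi:>6} psi = {psi * PSI_TO_KPA:7.1f} kPa")
# 0.25 psi ~ 1.7 kPa (UP); 2-5 psi ~ 14-34 kPa (building piping);
# 1-120 psi ~ 7-830 kPa (EP; the text rounds the top of this range to 800).
```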
In the United States, the maximum allowable operating pressure for natural gas piping systems within a building is based on NFPA 54: National Fuel Gas Code, except when approved by the Public Safety Authority or when insurance companies have more stringent requirements. Generally, natural gas system pressures are not allowed to exceed 5 psi (34 kPa) unless all of the following conditions are met:

The AHJ allows a higher pressure.
The distribution pipe is welded. (Note: some jurisdictions may also require that welded joints be radiographed to verify continuity.)
The pipes are enclosed for protection and placed in a ventilated area that does not allow gas accumulation.
The pipe is installed in areas used for industrial processes, research, storage or mechanical equipment rooms.

Generally, a maximum liquefied petroleum gas pressure of 20 psi (140 kPa) is allowed, provided the building is used specifically for industrial or research purposes and is constructed in accordance with NFPA 58: Liquefied Petroleum Gas Code, Chapter 7.

A seismic earthquake valve operating at a pressure of 55 psig (3.7 bar) can stop the flow of natural gas into the site-wide natural gas distribution piping network, which may run outdoors underground, above building roofs, or within the upper supports of a canopy roof. Seismic earthquake valves are designed for use at a maximum of 60 psig.

In Australia, natural gas is transported from gas processing facilities to regulator stations via transmission pipelines. Gas is then regulated down to distribution pressures and the gas is distributed around a gas network via gas mains. Small branches from the network, called services, connect individual domestic dwellings, or multi-dwelling buildings, to the network. The networks typically range in pressures from 7 kPa (low pressure) to 515 kPa (high pressure). Gas is then regulated down to 1.1 kPa or 2.75 kPa before being metered and passed to the consumer for domestic use. Natural gas mains are made from a variety of materials: historically cast iron, though more modern mains are made from steel or polyethylene.

In some states in the USA, natural gas can be supplied by independent natural gas wholesalers/suppliers using existing pipeline owners' infrastructure through Natural Gas Choice programs. LPG (liquefied petroleum gas) typically fuels outdoor and portable grills, though compressed natural gas (CNG) is sparsely available for similar applications in rural areas of the US that are underserved by the pipeline system and rely instead on the distribution network of the less expensive and more abundant LPG.

Transportation
CNG is a cleaner and also cheaper alternative to other automobile fuels such as gasoline (petrol). By the end of 2014, there were over 20 million natural gas vehicles worldwide, led by Iran (3.5 million), China (3.3 million), Pakistan (2.8 million), Argentina (2.5 million), India (1.8 million), and Brazil (1.8 million). The energy efficiency is generally equal to that of gasoline engines, but lower compared with modern diesel engines. Gasoline/petrol vehicles converted to run on natural gas suffer because of the low compression ratio of their engines, resulting in a reduction of delivered power while running on natural gas (10–15%). CNG-specific engines, however, use a higher compression ratio due to this fuel's higher octane number of 120–130.

Besides use in road vehicles, CNG can also be used in aircraft.
Compressed natural gas has been used in some aircraft, such as the Aviat Aircraft Husky 200 CNG and the Chromarat VX-1 KittyHawk. LNG is also being used in aircraft. Russian aircraft manufacturer Tupolev, for instance, is running a development program to produce LNG- and hydrogen-powered aircraft. The program has been running since the mid-1970s, and seeks to develop LNG and hydrogen variants of the Tu-204 and Tu-334 passenger aircraft, and also the Tu-330 cargo aircraft. Depending on the current market prices for jet fuel and LNG, fuel for an LNG-powered aircraft could cost 5,000 rubles (US$100) less per tonne, roughly 60% less, with considerable reductions in carbon monoxide, hydrocarbon and nitrogen oxide emissions.

The advantages of liquid methane as a jet engine fuel are that it has more specific energy than the standard kerosene mixes do and that its low temperature can help cool the air which the engine compresses for greater volumetric efficiency, in effect replacing an intercooler. Alternatively, it can be used to lower the temperature of the exhaust.

Fertilizers
Natural gas is a major feedstock for the production of ammonia, via the Haber process, for use in fertilizer production. The development of synthetic nitrogen fertilizer has significantly supported global population growth; it has been estimated that almost half the people on Earth are currently fed as a result of synthetic nitrogen fertilizer use.

Hydrogen
Natural gas can be used to produce hydrogen, with one common method being the hydrogen reformer. Hydrogen has many applications: it is a primary feedstock for the chemical industry, a hydrogenating agent, an important commodity for oil refineries, and the fuel source in hydrogen vehicles.

Animal and fish feed
Protein-rich animal and fish feed is produced by feeding natural gas to Methylococcus capsulatus bacteria on a commercial scale.

Olefins (alkenes)
Natural gas components (alkanes) can be converted into olefins (alkenes) or used in other chemical syntheses. Ethane can be converted by oxidative dehydrogenation to ethylene, which can be further converted to ethylene oxide, ethylene glycol, acetaldehyde or other olefins. Propane can be converted by oxidative dehydrogenation to propylene, or can be oxidized to acrylic acid and acrylonitrile.

Other
Natural gas is also used in the manufacture of fabrics, glass, steel, plastics, paint, synthetic oil, and other products. It serves as a fuel for industrial heating and desiccation processes, and as a raw material for large-scale fuel production using the gas-to-liquid (GTL) process (e.g. to produce sulphur- and aromatic-free diesel with low-emission combustion).

Environmental effects

Greenhouse effect and natural gas release
Human activity is responsible for about 60% of all methane emissions and for most of the resulting increase in atmospheric methane. Natural gas is intentionally released or is otherwise known to leak during the extraction, storage, transportation, and distribution of fossil fuels. Globally, methane accounts for an estimated 33% of anthropogenic greenhouse gas warming. The decomposition of municipal solid waste (a source of landfill gas) and wastewater accounts for an additional 18% of such emissions.
These estimates include substantial uncertainties which should be reduced in the near future with improved satellite measurements, such as those planned for MethaneSAT.

After release to the atmosphere, methane is removed by gradual oxidation to carbon dioxide and water by hydroxyl radicals (•OH) formed in the troposphere or stratosphere, giving the overall chemical reaction CH4 + 2O2 → CO2 + 2H2O. While the lifetime of atmospheric methane is relatively short when compared to carbon dioxide, with a half-life of about 7 years, it is more efficient at trapping heat in the atmosphere, so that a given quantity of methane has 84 times the global-warming potential of carbon dioxide over a 20-year period and 28 times over a 100-year period. Natural gas is thus a potent greenhouse gas due to the strong radiative forcing of methane in the short term, and the continuing effects of carbon dioxide in the longer term.

Targeted efforts to reduce warming quickly by reducing anthropogenic methane emissions are a climate change mitigation strategy supported by the Global Methane Initiative.

Greenhouse gas emissions
When refined and burned, natural gas can produce 25–30% less carbon dioxide per joule delivered than oil, and 40–45% less than coal. It can also produce potentially fewer toxic pollutants than other hydrocarbon fuels. However, compared to other major fossil fuels, natural gas causes more emissions in relative terms during the production and transportation of the fuel, meaning that life-cycle greenhouse gas emissions are about 50% higher than the direct emissions from the site of consumption.

In terms of the warming effect over 100 years, natural gas production and use comprise about one-fifth of human greenhouse gas emissions, and this contribution is growing rapidly. Globally, natural gas use emitted about 7.8 billion tons of CO2 in 2020 (including flaring), while coal and oil use emitted 14.4 and 12 billion tons, respectively. The IEA estimates the energy sector (oil, natural gas, coal and bioenergy) to be responsible for about 40% of human methane emissions. According to the IPCC Sixth Assessment Report, natural gas consumption grew by 15% between 2015 and 2019, compared to a 5% increase in oil and oil product consumption.

The continued financing and construction of new gas pipelines indicates that huge emissions of fossil greenhouse gases could be locked in for 40 to 50 years into the future. In the U.S. state of Texas alone, five new long-distance gas pipelines have been under construction, with the first entering service in 2019 and the others scheduled to come online during 2020–2022.

Installation bans
To reduce its greenhouse emissions, the Netherlands is subsidizing a transition away from natural gas for all homes in the country by 2050. In Amsterdam, no new residential gas accounts have been allowed since 2018, and all homes in the city are expected to be converted by 2040 to use the excess heat from adjacent industrial buildings and operations. Some cities in the United States have started prohibiting gas hookups for new houses, with state laws passed and under consideration to either require electrification or prohibit local requirements. New gas appliance hookups are banned in New York State and the Australian Capital Territory.
Additionally, the state of Victoria in Australia is set to implement a ban on new natural gas hookups starting from January 1, 2024, as part of its gas substitution roadmap. The UK government is also experimenting with alternative home heating technologies to meet its climate goals. To preserve their businesses, natural gas utilities in the United States have been lobbying for laws preventing local electrification ordinances, and are promoting renewable natural gas and hydrogen fuel.

Other pollutants
Although natural gas produces far lower amounts of sulfur dioxide and nitrogen oxides (NOx) than other fossil fuels, NOx from burning natural gas in homes can be a health hazard.

Radionuclides
Natural gas extraction also produces radioactive isotopes of polonium (Po-210), lead (Pb-210) and radon (Rn-220). Radon is a gas with initial activity from 5 to 200,000 becquerels per cubic meter of gas. It decays rapidly to Pb-210, which can build up as a thin film in gas extraction equipment.

Safety concerns
The natural gas extraction workforce faces unique health and safety challenges.

Production
Some gas fields yield sour gas containing hydrogen sulfide (H2S), a toxic compound when inhaled. Amine gas treating, an industrial-scale process which removes acidic gaseous components, is often used to remove hydrogen sulfide from natural gas. Extraction of natural gas (or oil) leads to a decrease in pressure in the reservoir. Such a decrease in pressure may in turn result in subsidence, the sinking of the ground above. Subsidence may affect ecosystems, waterways, sewer and water supply systems, foundations, and so on.

Fracking
Releasing natural gas from subsurface porous rock formations may be accomplished by a process called hydraulic fracturing or "fracking". Since the first commercial hydraulic fracturing operation in 1949, approximately one million wells have been hydraulically fractured in the United States. The production of natural gas from hydraulically fractured wells has used the technological developments of directional and horizontal drilling, which improved access to natural gas in tight rock formations. Strong growth in the production of unconventional gas from hydraulically fractured wells occurred between 2000 and 2012.

In hydraulic fracturing, well operators force water mixed with a variety of chemicals through the wellbore casing into the rock. The high-pressure water breaks up or "fracks" the rock, which releases gas from the rock formation. Sand and other particles are added to the water as a proppant to keep the fractures in the rock open, thus enabling the gas to flow into the casing and then to the surface. Chemicals are added to the fluid to perform such functions as reducing friction and inhibiting corrosion. After the "frack", oil or gas is extracted and 30–70% of the frack fluid, i.e. the mixture of water, chemicals, sand, etc., flows back to the surface. Many gas-bearing formations also contain water, which will flow up the wellbore to the surface along with the gas, in both hydraulically fractured and non-hydraulically fractured wells. This produced water often has a high content of salt and other dissolved minerals that occur in the formation.

The volume of water used to hydraulically fracture wells varies according to the hydraulic fracturing technique.
In the United States, the average volume of water used per hydraulic fracture has been reported as nearly 7,375 gallons for vertical oil and gas wells prior to 1953, nearly 197,000 gallons for vertical oil and gas wells between 2000 and 2010, and nearly 3 million gallons for horizontal gas wells between 2000 and 2010.

Determining which fracking technique is appropriate for well productivity depends largely on the properties of the reservoir rock from which to extract oil or gas. If the rock is characterized by low permeability (its ability to let substances, such as gas, pass through it), then the rock may be considered a source of tight gas. Fracking for shale gas, which is currently also known as a source of unconventional gas, involves drilling a borehole vertically until it reaches a lateral shale rock formation, at which point the drill turns to follow the rock for hundreds or thousands of feet horizontally. In contrast, conventional oil and gas sources are characterized by higher rock permeability, which naturally enables the flow of oil or gas into the wellbore with less intensive hydraulic fracturing techniques than the production of tight gas has required. The decades of development of drilling technology for conventional and unconventional oil and gas production have not only improved access to natural gas in low-permeability reservoir rocks, but have also posed significant adverse impacts on environmental and public health.

The US EPA has acknowledged that toxic, carcinogenic chemicals, such as benzene and ethylbenzene, have been used as gelling agents in water and chemical mixtures for high-volume horizontal fracturing (HVHF). Following the hydraulic fracture in HVHF, the water, chemicals, and frack fluid that return to the well's surface, called flowback or produced water, may contain radioactive materials, heavy metals, natural salts, and hydrocarbons which exist naturally in shale rock formations. Fracking chemicals, radioactive materials, heavy metals, and salts that are removed from the HVHF well by well operators are so difficult to remove from the water they are mixed with, and would so heavily pollute the water cycle, that most of the flowback is either recycled into other fracking operations or injected into deep underground wells, eliminating the water that HVHF required from the hydrologic cycle.

Historically low gas prices have delayed the nuclear renaissance, as well as the development of solar thermal energy.

Added odor
Natural gas in its native state is colorless and almost odorless. In order to assist consumers in detecting leaks, an odorizer with a scent similar to rotten eggs, tert-butylthiol (t-butyl mercaptan), is added. Sometimes a related compound, thiophane, may be used in the mixture. Situations have occurred in the natural gas industry in which an odorant added to natural gas can be detected by analytical instrumentation but cannot be properly detected by an observer with a normal sense of smell. This is caused by odor masking, when one odorant overpowers the sensation of another. As of 2011, the industry is conducting research on the causes of odor masking.

Risk of explosion
Explosions caused by natural gas leaks occur a few times each year. Individual homes, small businesses and other structures are most frequently affected when an internal leak builds up gas inside the structure. Leaks often result from excavation work, such as when contractors dig and strike pipelines, sometimes without knowing any damage resulted.
Frequently, the blast is powerful enough to significantly damage a building but leave it standing. In these cases, the people inside tend to have minor to moderate injuries. Occasionally, the gas can collect in high enough quantities to cause a deadly explosion, destroying one or more buildings in the process. Many building codes now forbid the installation of gas pipes inside cavity walls or below floorboards to mitigate this risk. Gas usually dissipates readily outdoors, but can sometimes collect in dangerous quantities if flow rates are high enough. However, considering the tens of millions of structures that use the fuel, the individual risk of using natural gas is low.

Risk of carbon monoxide inhalation
Natural gas heating systems may cause carbon monoxide poisoning if unvented or poorly vented. Improvements in natural gas furnace designs have greatly reduced CO poisoning concerns. Detectors are also available that warn of carbon monoxide or explosive gases such as methane and propane.

Energy content, statistics, and pricing
Quantities of natural gas are measured in standard cubic meters (cubic meter of gas at a temperature of 15 °C (59 °F) and a pressure of 101.325 kPa (14.6959 psi)) or standard cubic feet (cubic foot of gas at a temperature of 60.0 °F and a pressure of 14.73 psi (101.6 kPa)); 1 standard cubic meter = 35.301 standard cubic feet. The gross heat of combustion of commercial-quality natural gas is around 39 MJ/m3 (0.31 kWh/cu ft), but this can vary by several percent. This is about 50 to 54 MJ/kg depending on the density. For comparison, the heat of combustion of pure methane is 37.7 MJ per standard cubic metre, or 55.5 MJ/kg.

Except in the European Union, the U.S., and Canada, natural gas is sold in gigajoule retail units. LNG (liquefied natural gas) and LPG (liquefied petroleum gas) are traded in metric tonnes (1,000 kg) or million BTU as spot deliveries. Long-term natural gas distribution contracts are signed in cubic meters, and LNG contracts are in metric tonnes. LNG and LPG are transported by specialized transport ships, as the gas is liquefied at cryogenic temperatures. The specification of each LNG/LPG cargo will usually contain the energy content, but this information is in general not available to the public.

The European Union aimed to cut its gas dependency on Russia by two-thirds in 2022.

In August 2015, possibly the largest natural gas discovery in history was announced by the Italian gas company ENI. The company indicated that it had unearthed a "supergiant" gas field in the Mediterranean Sea covering about 40 square miles (100 km2). This was named the Zohr gas field and could hold a potential 30 trillion cubic feet (850 billion cubic meters) of natural gas. ENI said that the energy is about 5.5 billion barrels of oil equivalent (BOE) (3.4×10¹⁰ GJ). The Zohr field was found in the deep waters off the northern coast of Egypt, and ENI claims that it will be the largest ever in the Mediterranean and even the world.

European Union
Gas prices for end users vary greatly across the EU. A single European energy market, one of the key objectives of the EU, should level the prices of gas in all EU member states. Moreover, it would help to resolve supply and global warming issues, as well as strengthen relations with other Mediterranean countries and foster investments in the region. Qatar has been asked by the US to supply emergency gas to the EU in case of supply disruptions in the Russo-Ukrainian crisis.
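The unit relationships quoted at the start of this section chain together directly; the sketch below uses the ~39 MJ/scm gross heating value and the 35.301 scf/scm conversion from the text, plus a standard BTU-per-MJ factor that is assumed rather than taken from this article.

```python
# Energy-unit sketch for commercial-quality natural gas, using the
# figures quoted at the start of this section (~39 MJ/scm gross,
# 1 scm = 35.301 scf). BTU_PER_MJ is an assumed standard factor.

MJ_PER_SCM = 39.0
SCF_PER_SCM = 35.301
KWH_PER_MJ = 1 / 3.6
BTU_PER_MJ = 947.8

def energy_of(volume_scm: float) -> dict:
    mj = volume_scm * MJ_PER_SCM
    return {"MJ": mj,
            "kWh": mj * KWH_PER_MJ,
            "BTU": mj * BTU_PER_MJ,
            "scf": volume_scm * SCF_PER_SCM}

one_scm = energy_of(1.0)
for unit, value in one_scm.items():
    print(f"{unit}: {value:,.1f}")
print(f"BTU per scf: {one_scm['BTU'] / one_scm['scf']:,.0f}")  # ~1,047 gross
```

The roughly 1,047 BTU per cubic foot this yields is in the same range as the 1,028 BTU figure quoted for US units in the next subsection; both vary with gas composition.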
The European Union aimed to cut its gas dependency on Russia by two-thirds in 2022.

In August 2015, possibly the largest natural gas discovery in history was announced by the Italian gas company ENI. The company indicated that it had unearthed a "supergiant" gas field in the Mediterranean Sea covering about 40 square miles (100 km2). Named the Zohr gas field, it could hold a potential 30 trillion cubic feet (850 billion cubic meters) of natural gas, which ENI estimated at about 5.5 billion barrels of oil equivalent (BOE), or 3.4×10^10 GJ. The field was found in the deep waters off the northern coast of Egypt, and ENI claims it will be the largest ever found in the Mediterranean and possibly the world.

European Union
Gas prices for end users vary greatly across the EU. A single European energy market, one of the key objectives of the EU, should level the prices of gas in all EU member states. It would also help to resolve supply and global warming issues, strengthen relations with other Mediterranean countries and foster investments in the region. Qatar has been asked by the US to supply emergency gas to the EU in case of supply disruptions in the Russo-Ukrainian crisis.

United States
In US units, one standard cubic foot (28 L) of natural gas produces around 1,028 British thermal units (1,085 kJ). The actual heating value when the water formed does not condense is the net heat of combustion and can be as much as 10% less.

In the United States, retail sales are often in units of therms (th); 1 therm = 100,000 BTU. Gas sales to domestic consumers are often in units of 100 standard cubic feet (scf). Gas meters measure the volume of gas used, and this is converted to therms by multiplying the volume by the energy content of the gas used during that period, which varies slightly over time. The typical annual consumption of a single family residence is 1,000 therms, or one Residential Customer Equivalent (RCE). Wholesale transactions are generally done in decatherms (Dth), thousand decatherms (MDth), or million decatherms (MMDth). A million decatherms is a trillion BTU, roughly a billion cubic feet of natural gas. The price of natural gas varies greatly depending on location and type of consumer. The typical caloric value of natural gas is roughly 1,000 BTU per cubic foot, depending on gas composition.

Natural gas in the United States is traded as a futures contract on the New York Mercantile Exchange. Each contract is for 10,000 million BTU, or 10 billion BTU (10,551 GJ). Thus, if the price of gas is $10 per million BTU on the NYMEX, the contract is worth $100,000.

Canada
Canada uses metric measure for internal trade of petrochemical products. Consequently, natural gas is sold by the gigajoule (GJ), cubic meter (m3) or thousand cubic meters (E3m3). Distribution infrastructure and meters almost always meter volume (cubic foot or cubic meter). Some jurisdictions, such as Saskatchewan, sell gas by volume only. In other jurisdictions, such as Alberta, gas is sold by energy content (GJ). In these areas, almost all meters for residential and small commercial customers measure volume (m3 or ft3), and billing statements include a multiplier to convert the volume to the energy content of the local gas supply. A gigajoule (GJ) is approximately the energy in 80 litres (0.5 barrels) of oil, or in 28 m3 (1,000 cu ft), or 1 million BTU of gas. The energy content of gas supply in Canada can vary from 37 to 43 MJ/m3 (990 to 1,150 BTU/cu ft) depending on gas supply and processing between the wellhead and the customer.

Adsorbed natural gas (ANG)
Natural gas may be stored by adsorbing it to porous solids called sorbents. The optimal condition for methane storage is at room temperature and atmospheric pressure. Pressures up to 4 MPa (about 40 times atmospheric pressure) will yield greater storage capacity. The most common sorbent used for ANG is activated carbon (AC), primarily in three forms: activated carbon fiber (ACF), powdered activated carbon (PAC), and activated carbon monolith.

See also
Associated petroleum gas
Energy transition
Gas/oil ratio
Natural gas by country
Peak gas
Power-to-gas
Renewable natural gas
Strategic natural gas reserve
World energy supply and consumption

References

Further reading
Blanchard, Charles (2020). The Extraction State: A History of Natural Gas in America. U of Pittsburgh Press.

External links
Global Fossil Infrastructure Tracker
Global Oil & Gas Exit List (GOGEL) by Urgewald
Carbon Mapper Data Portal featuring methane point source data
steelmaking
Steelmaking is the process of producing steel from iron ore and/or scrap. In steelmaking, impurities such as nitrogen, silicon, phosphorus, sulfur and excess carbon (the most important impurity) are removed from the sourced iron, and alloying elements such as manganese, nickel, chromium, carbon and vanadium are added to produce different grades of steel.

Steelmaking has existed for millennia, but it was not commercialized on a massive scale until the mid-19th century. An ancient process of steelmaking was the crucible process. In the 1850s and 1860s, the Bessemer process and the Siemens-Martin process turned steelmaking into a heavy industry. Today there are two major commercial processes for making steel: basic oxygen steelmaking, which uses liquid pig iron from the blast furnace and scrap steel as its main feed materials, and electric arc furnace (EAF) steelmaking, which uses scrap steel or direct reduced iron (DRI) as its main feed materials. Oxygen steelmaking is fueled predominantly by the exothermic nature of the reactions inside the vessel; in contrast, in EAF steelmaking, electrical energy is used to melt the solid scrap and/or DRI materials. In recent times, EAF steelmaking technology has evolved closer to oxygen steelmaking as more chemical energy is introduced into the process.

Steelmaking is one of the most carbon-emission-intensive industries in the world. As of 2020, steelmaking was responsible for about 10% of greenhouse gas emissions. To mitigate global warming, the industry will need to find significant reductions in emissions. In 2020, McKinsey identified a number of technologies that could potentially offer some emission reductions, including carbon capture and reuse during manufacturing, and switching to solar and wind energy to either power electric arc furnaces or produce hydrogen as a cleaner fuel.

History
Steelmaking has played a crucial role in the development of ancient, medieval, and modern technological societies. Early processes of steelmaking were developed during the classical era in Ancient Iran, Ancient China, India, and Rome. Cast iron is a hard, brittle material that is difficult to work, whereas steel is malleable, relatively easily formed and a versatile material. For much of human history, steel had only been made in small quantities. Since the invention of the Bessemer process in 19th-century Britain and subsequent technological developments in injection technology and process control, mass production of steel has become an integral part of the global economy and a key indicator of modern technological development.

The earliest means of producing steel was in a bloomery. Early modern methods of producing steel were often labour-intensive and highly skilled arts; examples include the finery forge (in which the German finery process could be managed to produce steel), blister steel and crucible steel.

An important aspect of the Industrial Revolution was the development of large-scale methods of producing forgeable metal (bar iron or steel). The puddling furnace was initially a means of producing wrought iron but was later applied to steel production. The real revolution in modern steelmaking only began at the end of the 1850s, when the Bessemer process became the first successful method of steelmaking in high quantity, followed by the open-hearth furnace.

Modern processes for manufacturing of steel
Modern steelmaking processes can be divided into three steps: primary, secondary and tertiary. Primary steelmaking involves smelting iron into steel.
Secondary steelmaking involves adding or removing other elements such as alloying agents and dissolved gases. Tertiary steelmaking involves casting into sheets, rolls or other forms. Multiple techniques are available for each step.

Primary steelmaking

Basic oxygen
Basic oxygen steelmaking is a method of primary steelmaking in which carbon-rich pig iron is melted and converted into steel. Blowing oxygen through molten pig iron converts some of the carbon in the iron into CO and CO2, turning it into steel. Refractories (calcium oxide and magnesium oxide) line the smelting vessel to withstand the high temperature and corrosive nature of the molten metal and slag. The chemistry of the process is controlled to ensure that impurities such as silicon and phosphorus are removed from the metal.

The modern process was developed in 1948 by Robert Durrer as a refinement of the Bessemer converter, replacing air with more efficient oxygen. It reduced the capital cost of the plants and the smelting time, and increased labor productivity. Between 1920 and 2000, labour requirements in the industry decreased by a factor of 1,000, to just 0.003 man-hours per tonne. In 2013, 70% of global steel output was produced using the basic oxygen furnace. Furnaces can convert up to 350 tons of iron into steel in less than 40 minutes, compared to 10–12 hours in an open hearth furnace.

Electric arc
Electric arc furnace steelmaking is the manufacture of steel from scrap or direct reduced iron melted by electric arcs. In an electric arc furnace, a batch ("heat") of iron is loaded into the furnace, sometimes with a "hot heel" (molten steel from a previous heat). Gas burners may be used to assist with the melt. As in basic oxygen steelmaking, fluxes are also added to protect the lining of the vessel and help improve the removal of impurities. Electric arc furnace steelmaking typically uses furnaces of capacity around 100 tonnes that produce steel every 40 to 50 minutes. This process allows larger alloy additions than the basic oxygen method.

HIsarna process
In the HIsarna ironmaking process, iron ore is processed almost directly into liquid iron or hot metal. The process is based around a type of blast furnace called a cyclone converter furnace, which makes it possible to skip the process of manufacturing pig iron pellets that is necessary for the basic oxygen steelmaking process. Without the necessity of this preparatory step, the HIsarna process is more energy-efficient and has a lower carbon footprint than traditional steelmaking processes.

Hydrogen reduction
Steel can be produced from direct-reduced iron, which in turn can be produced from iron ore as it undergoes chemical reduction with hydrogen. Renewable hydrogen allows steelmaking without the use of fossil fuels. In 2021, a pilot plant in Sweden tested this process. Direct reduction occurs at 1,500 °F (820 °C). The iron is infused with carbon (from coal) in an electric arc furnace. Producing the necessary hydrogen by electrolysis requires approximately 2,600 kWh per ton of steel. Costs are estimated to be 20–30% higher than conventional methods. However, the cost of CO2 emissions adds to the price of basic oxygen production, and a 2018 study in Science estimated that prices will break even at a carbon price of €68 per tonne of CO2, which is expected to be reached in the 2030s.
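To see how such a break-even carbon price arises, consider the following minimal sketch. The conventional steel cost, premium and emission intensity used here are illustrative assumptions, not figures from the study:

```python
# Illustrative break-even carbon price for hydrogen-based steel versus
# the blast furnace/basic oxygen route. All inputs are assumptions for
# illustration only; they are not taken from the 2018 Science study.
conventional_cost = 400.0  # assumed BF/BOF steel cost, EUR per tonne
green_premium = 0.25       # hydrogen route assumed ~20-30% more expensive
co2_intensity = 1.8        # t CO2 per t steel for the conventional route

extra_cost = conventional_cost * green_premium  # EUR per tonne of steel
breakeven = extra_cost / co2_intensity          # EUR per tonne of CO2
print(f"Break-even carbon price: ~EUR {breakeven:.0f}/t CO2")  # ~56
```

With these inputs the break-even falls in the same general range as the €68 per tonne cited above; the exact figure depends on the assumed premium and emission intensity.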
Secondary steelmaking
Secondary steelmaking is most commonly performed in ladles. Some of the operations performed in ladles include de-oxidation (or "killing"), vacuum degassing, alloy addition, inclusion removal, inclusion chemistry modification, de-sulphurisation, and homogenisation. It is now common to perform ladle metallurgical operations in gas-stirred ladles with electric arc heating in the lid of the furnace. Tight control of ladle metallurgy is associated with producing high grades of steel in which the tolerances in chemistry and consistency are narrow.

Carbon dioxide emissions
As of 2021, steelmaking was estimated to be responsible for around 11% of global emissions of carbon dioxide and around 7% of global greenhouse gas emissions. Making 1 ton of steel emits about 1.8 tons of carbon dioxide. The bulk of these emissions results from the industrial process in which coal is used as the source of carbon that removes oxygen from iron ore in the following chemical reaction, which occurs in a blast furnace:

Fe2O3(s) + 3 CO(g) → 2 Fe(s) + 3 CO2(g)

Additional carbon dioxide emissions result from mining, refining and shipping the ore used, basic oxygen steelmaking, calcination, and the hot blast. Carbon capture and utilization and carbon capture and storage are proposed techniques for reducing carbon dioxide emissions in the steel industry, along with the reduction of iron ore using green hydrogen rather than carbon. See below for further decarbonization strategies.

Mining and extraction
Coal and iron ore mining are very energy intensive, and result in numerous environmental damages, from pollution to biodiversity loss, deforestation, and greenhouse gas emissions. Iron ore is shipped great distances to steel mills.

Blast furnace
To make steel, iron and carbon are needed. On its own, iron is not very strong, but a low concentration of carbon (less than 1 percent, depending on the kind of steel) gives steel its important properties. The carbon in steel is obtained from coal, and the iron from iron ore. However, iron ore is a mixture of iron, oxygen and other trace elements. To make steel, the iron needs to be separated from the oxygen and a tiny amount of carbon needs to be added. Both are accomplished by melting the iron ore at a very high temperature (1,700 degrees Celsius or over 3,000 degrees Fahrenheit) in the presence of oxygen (from the air) and a type of coal called coke. At those temperatures, the iron ore releases its oxygen, which is carried away by the carbon from the coke in the form of carbon dioxide:

Fe2O3(s) + 3 CO(g) → 2 Fe(s) + 3 CO2(g)

The reaction occurs due to the lower (favorable) energy state of carbon dioxide compared to iron oxide, while the high temperatures are needed to achieve the activation energy for the reaction. A small amount of carbon bonds with the iron, forming pig iron, an intermediary before steel, whose carbon content is too high, at around 4%.

Decarburization
To reduce the carbon content in pig iron and obtain the desired carbon content of steel, the pig iron is re-melted and oxygen is blown through it in a process called basic oxygen steelmaking, which occurs in a ladle. In this step, the oxygen binds with the undesired carbon, carrying it away in the form of carbon dioxide gas, an additional source of emissions. After this step, the carbon content in the pig iron is lowered sufficiently and steel is obtained.
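The scale of these emissions follows directly from the stoichiometry of the reduction reaction shown above. As a minimal sketch (pure reaction stoichiometry only; real plants emit more because coking, calcination, the hot blast and the other steps listed above are ignored here):

```python
# Stoichiometric CO2 floor for blast-furnace iron reduction:
#   Fe2O3(s) + 3 CO(g) -> 2 Fe(s) + 3 CO2(g)
# This counts only the reduction reaction itself; mining, coking,
# calcination and the hot blast add further emissions, which is why
# real-world figures (~1.8 t CO2 per t of steel) are higher.
M_FE, M_C, M_O = 55.845, 12.011, 15.999  # molar masses, g/mol
M_CO2 = M_C + 2 * M_O                    # ~44.01 g/mol

# 3 mol of CO2 are released per 2 mol of Fe produced.
co2_per_t_iron = (3 * M_CO2) / (2 * M_FE)
print(f"Reduction alone: {co2_per_t_iron:.2f} t CO2 per t of iron")  # ~1.18
```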
Calcination
Further carbon dioxide emissions result from the use of limestone, which is decomposed at high temperatures in a reaction called calcination:

CaCO3(s) → CaO(s) + CO2(g)

The carbon dioxide released makes this reaction an additional source of emissions. Modern industry has introduced calcium oxide (CaO, quicklime) as a replacement. It acts as a chemical flux, removing impurities such as sulfur or phosphorus (e.g. apatite or fluorapatite) in the form of slag, and keeps emissions of CO2 low. For example, the calcium oxide can react to remove silicon oxide impurities:

SiO2 + CaO → CaSiO3

This use of limestone to provide a flux occurs both in the blast furnace (to obtain pig iron) and in basic oxygen steelmaking (to obtain steel).

Hot blast
Further carbon dioxide emissions result from the hot blast, which is used to increase the heat of the blast furnace. The hot blast pumps hot air into the blast furnace where the iron ore is reduced to pig iron, helping to achieve the high activation energy. The hot blast temperature can be from 900 °C to 1300 °C (1600 °F to 2300 °F) depending on the stove design and condition. Oil, tar, natural gas, powdered coal and oxygen can also be injected into the furnace to combine with the coke to release additional energy and increase the percentage of reducing gases present, increasing productivity. If the air in the hot blast is heated by burning fossil fuels, which is often the case, this is an additional source of carbon dioxide emissions.

Strategies for reducing carbon emissions
There are several carbon abatement and decarbonization strategies in the steelmaking industry, depending on the basic manufacturing process used, of which blast furnace/basic oxygen furnace (BF/BOF) is currently the dominant process. Options fall into three general categories: switching the energy source from fossil fuels to wind and solar, increasing the efficiency of processing, and innovative new technological processes. Most of the latter are still in speculative or experimental stages.

Switching to sustainable energy sources
CO2 emissions vary according to energy sources. When sustainable energy such as wind or solar is used to power the process, in electric arc furnaces, or to create hydrogen as a fuel, emissions can be reduced dramatically. European projects from HYBRIT, LKAB, Voestalpine, and ThyssenKrupp are pursuing this strategy.

Top gas recovery in BF/BOF
Top gas from the blast furnace is the gas that is normally exhausted into the air during steelmaking. This gas contains CO2 and is also rich in the reducing agents H2 and CO. The top gas can be captured, the CO2 removed, and the reducing agents reinjected into the blast furnace. One study claims this process can reduce BF CO2 emissions by 75%; another states that emissions are reduced by 56.5% with carbon capture and storage, and by 26.2% if only the recycling of the reducing agents is used. To keep the captured carbon from entering the atmosphere, a method of storing or using it would have to be found. Another way to use the top gas would be in a top recovery turbine that generates electricity, which could be used to reduce the energy intensity of the process if electric arc smelting is used. Carbon could also be captured from gases in the coke oven.
Currently, the difficulty of separating the CO2 from other gases and components in the system, and the high cost of the equipment and infrastructure changes needed, have kept this strategy marginal, but the potential for emission reduction has been estimated at 65% to 80%.

Scrap-use in BF/BOF
Scrap in steelmaking refers to steel that has either reached its end of life or was generated during the manufacture of steel components. Steel is easy to separate and recycle due to its inherent magnetism, and using scrap avoids the emission of 1.5 tons of CO2 for every ton of scrap used. Steel recycling rates are already high, with virtually all collected scrap being recycled in the steel industry.

H2 enrichment in BF/BOF
In the blast furnace, the iron oxides are reduced by a combination of CO, H2, and carbon. Only around 10% of the iron oxides are reduced by H2. With H2 enrichment processing, the proportion of iron oxides reduced by H2 is increased, so that less carbon is consumed and less CO2 is emitted. This process can reduce emissions by an estimated 20%.

The HIsarna process
The HIsarna ironmaking process, described above, produces iron in a cyclone converter furnace without the pre-processing steps of coking and agglomeration, which reduces CO2 emissions by around 20%.

Hydrogen plasma
One speculative idea is an ongoing project by SuSteel to develop a hydrogen plasma technology that reduces the oxides with hydrogen, as opposed to CO or carbon, and melts the iron at high operating temperatures. The project is still at the developmental stage.

Iron ore electrolysis
Another developing technology is iron ore electrolysis, in which the reducing agent is simply electrons, as opposed to H2, CO, or carbon. One method for this is molten oxide electrolysis, in which the cell consists of an inert anode, a liquid oxide electrolyte (CaO, MgO, etc.), and the molten steel. When heated, the iron ore is reduced to iron and oxygen. Boston Metal is at the semi-industrial stage with this process, with plans to reach commercialization by 2026; it is expanding a pilot plant in Woburn, Massachusetts, and building a production facility in Brazil. The company was founded by MIT professors Donald Sadoway and Antoine Allanore.

A research project involving the steel company ArcelorMittal tested a different type of iron ore electrolysis process in a pilot project called Siderwin. It operates at relatively low temperatures (around 110 °C), while the Boston Metal process operates at high temperatures (around 1,600 °C). ArcelorMittal is currently investigating whether to scale up the technology and build a larger plant, and expects an investment decision by 2025.

Using biomass in BF/BOF
In steelmaking, coal and coke are used for fuel and iron reduction. Biomass such as charcoal or wood pellets is a potential alternative fuel, but it does not actually reduce emissions: the burning biomass still emits carbon, and merely provides a "carbon offset", in which emissions are "traded" against the sequestration of the source biomass, offsetting 5% to 28% of current CO2 values. Offsetting has a poor reputation globally, as cutting down trees to create pellets or charcoal does not sequester carbon; it interrupts the natural sequestration the standing tree was providing. Offsetting is not reduction.

Outlook
Overall, there are a number of innovative methods to reduce CO2 emissions within the steelmaking industry.
Some of these, such as top gas recovery and hydrogen reduction in DRI/EAF, are highly feasible with current infrastructure and technology. Others, such as hydrogen plasma and iron ore electrolysis, are still at the research or semi-industrial stage. Despite these efforts, as of 2023 emissions from steelmaking were not yet falling.

See also
Argon oxygen decarburization
Basic oxygen steelmaking
Blast furnace
Calcination
Carbon additive
Decarburization
FINEX
Flodin process
History of the steel industry (1850–1970)
History of the steel industry (1970–present)
Metallurgical coal
Steel mill

References

External links
The short film The Drama of Steel (1946) is available for free viewing and download at the Internet Archive.
U.S. Steel Gary Works Photograph Collection, 1906–1971
"Steel for the Tools for Victory", Popular Science (December 1943), a large, detailed article with numerous illustrations and cutaways on the then-modern basics of making steel
redd and redd+
REDD+ (or REDD-plus) is a framework to encourage developing countries to reduce emissions and enhance removals of greenhouse gases through a variety of forest management options, and to provide technical and financial support for these efforts. The acronym refers to "reducing emissions from deforestation and forest degradation in developing countries, and the role of conservation, sustainable management of forests, and enhancement of forest carbon stocks in developing countries". REDD+ is a voluntary climate change mitigation framework developed by the United Nations Framework Convention on Climate Change (UNFCCC). REDD originally referred to "reducing emissions from deforestation in developing countries", which was the title of the original document on REDD; it was superseded by REDD+ in the Warsaw Framework on REDD-plus negotiations.

Since 2000, various studies have estimated that land use change, including deforestation and forest degradation, accounts for 12–29% of global greenhouse gas emissions. For this reason, the inclusion of reducing emissions from land use change is considered essential to achieving the objectives of the UNFCCC.

Main elements of REDD+
As with other approaches under the UNFCCC, there are few prescriptions that specifically mandate how to implement the mechanism at the national level; the principles of national sovereignty and subsidiarity imply that the UNFCCC can only provide guidelines for implementation, and require that reports are submitted in a certain format and are open for review by the convention. There are certain aspects that go beyond this basic philosophy, such as the 'safeguards' explained in more detail below, but in essence REDD+ is no more than a set of guidelines on how to report on forest resources and forest management strategies and their results in terms of reducing emissions and enhancing removals of greenhouse gases. However, a set of requirements has been elaborated to ensure that REDD+ programs contain key elements, that reports from Parties are consistent and comparable, and that their contents are open to review and serve the objectives of the convention.

Decision 1/CP.16 requests all developing countries aiming to undertake REDD+ to develop the following elements:
(a) A national strategy or action plan;
(b) A national forest reference emission level and/or forest reference level or, if appropriate, as an interim measure, subnational forest reference emission levels and/or forest reference levels;
(c) A robust and transparent national forest monitoring system for the monitoring and reporting on REDD+ activities (see below), with, if appropriate, subnational monitoring and reporting as an interim measure;
(d) A system for providing information on how the social and environmental safeguards (included in an appendix to the decision) are being addressed and respected throughout the implementation of REDD+.

It further requests developing countries, when developing and implementing their national REDD+ strategies or action plans, to address, among other issues, the drivers of deforestation and forest degradation, land tenure issues, forest governance issues, gender considerations and the social and environmental safeguards, ensuring the full and effective participation of relevant stakeholders, inter alia indigenous peoples and local communities.
Eligible activities
The decisions on REDD+ enumerate five "eligible activities" that developing countries may implement to reduce emissions and enhance removals of greenhouse gases:
(a) Reducing emissions from deforestation.
(b) Reducing emissions from forest degradation.
(c) Conservation of forest carbon stocks.
(d) Sustainable management of forests.
(e) Enhancement of forest carbon stocks.

The first two activities reduce emissions of greenhouse gases, and they are the two activities listed in the original 2005 submission on REDD by the Coalition for Rainforest Nations. The three remaining activities constitute the "+" in REDD+. The last one enhances removals of greenhouse gases, while the effect of the other two on emissions or removals is indeterminate but expected to be minimal.

Policies and measures
In the text of the convention, repeated reference is made to national "policies and measures", the set of legal, regulatory and administrative instruments that parties develop and implement to achieve the objective of the convention. These policies can be specific to climate change mitigation or adaptation, or of a more generic nature but with an impact on greenhouse gas emissions. Many of the signatory parties to the UNFCCC have by now established climate change strategies and response measures. The REDD+ approach has a similar, more focused set of policies and measures. Forest sector laws and procedures are typically in place in most countries; in addition, countries have to develop specific national strategies and/or action plans for REDD+.

Of specific interest to REDD+ are the drivers of deforestation and forest degradation. The UNFCCC decisions call on countries to make an assessment of these drivers and to base their policies and measures on this assessment, such that the policies and measures can be directed to where the impact is greatest. Some of the drivers will be generic, in the sense that they are prevalent in many countries (such as increasing population pressure), while others will be specific to countries or regions within countries. Countries are encouraged to identify "national circumstances" that affect the drivers: specific conditions within the country that affect the forest resources. Hints at typical national circumstances can be found in the preambles to various COP decisions, such as "Reaffirming that economic and social development and poverty eradication are global priorities" in the Bali Action Plan, enabling developing countries to prioritize policies like poverty eradication through agricultural expansion or hydropower development over forest protection.

Reference levels
Reference levels are a key component of any national REDD+ program. They serve as a baseline for measuring the success of REDD+ programs in reducing greenhouse gas emissions from forests, and they are available for examination by the international community to assess the reported emission reductions or enhanced removals. This establishes the confidence of the international community in the national REDD+ program. The results measured against these baselines may be eligible for results-based payments. Setting the reference levels too laxly will erode confidence in the national REDD+ program, while setting them too strictly will erode the potential to earn the benefits with which to operate it. Careful consideration of all relevant information is therefore of crucial importance.

The requirements and characteristics of reference levels are under the purview of the UNFCCC.
Given the wide variety of ecological conditions and country-specific circumstances, these requirements are rather global, and every country will have a range of options in its definition of reference levels within its territory.

A reference level (RL) is expressed as an amount, derived by differencing a sequence of amounts over a period of time. For REDD+ purposes the amount is expressed in CO2 equivalents (CO2e; see global warming potential) of emissions or removals per year. If the amounts are emissions, the reference level becomes a reference emission level (REL); RELs are seen by some as incomplete because they do not take removals into account. Reference levels are based on a scope (what is included), a scale (the geographical area from which the level is derived or to which it is applied), and a period over which the reference level is calculated. The scope, the scale and the period can be modified in reference to national circumstances: specific conditions in the country that would call for an adjustment of the basis from which the reference levels are constructed. A reference level can be based on observations or measurements of amounts in the past, in which case it is retrospective, or it can be an expectation or projection of amounts into the future, in which case it is prospective.

Reference levels eventually have to have national coverage, but they may be composed of a number of subnational reference levels. As an example, forest degradation may have one reference emission level for commercial selective logging and another for the extraction of minor timber and firewood for subsistence use by rural communities. Effectively, every identified driver of deforestation or forest degradation has to be represented in one or more reference emission levels. Similarly, for reference levels for the enhancement of carbon stocks, there may be one reference level for plantation timber species and another for natural regeneration, possibly stratified by ecological region or forest type.

Details on the reporting and technical assessment of reference levels are given in Decision 13/CP.19.
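As an illustration of the basic arithmetic, a retrospective reference emission level can be constructed as the average of historical annual emissions, with results then measured as the difference between the reference level and what is actually observed. The sketch below is a simplified, hypothetical example with invented figures, not a prescribed UNFCCC method; real reference levels involve adjustments for national circumstances:

```python
# Hypothetical retrospective reference emission level (REL) and the
# results measured against it. All figures are invented for illustration;
# actual RELs are adjusted for national circumstances and technically
# assessed under Decision 13/CP.19.
historical = [95.0, 102.0, 98.0, 105.0]   # Mt CO2e/yr over a reference period
rel = sum(historical) / len(historical)   # simple historical average: 100.0

observed = [90.0, 85.0, 80.0]             # Mt CO2e/yr during implementation
reductions = [rel - e for e in observed]  # reported results per year

for year, r in enumerate(reductions, start=1):
    print(f"Year {year}: {r:.1f} Mt CO2e of emission reductions")
print(f"Total: {sum(reductions):.1f} Mt CO2e")  # basis for results-based payments
```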
Monitoring: measurement, reporting and verification
In Decision 2/CP.15, the UNFCCC requests countries to develop national forest monitoring systems (NFMS) that support the functions of measurement, reporting and verification (MRV) of the actions and achievements of the implementation of REDD+ activities. The NFMS is the key component in the management of information for national REDD+ programs. A fully functional monitoring system can go beyond the requirements set by the UNFCCC to include issues such as a registry of projects and participants, and the evaluation of program achievements and policy effectiveness. It may be purpose-built, but it may also be integrated into existing forest monitoring tools.

Measurements are suggested to be made using a combination of remote sensing and ground-based observations. Remote sensing is particularly suited to the assessment of forest area and the stratification of different forest types. Ground-based observations involve forest surveys to measure the carbon pools used by the Intergovernmental Panel on Climate Change (IPCC), the United Nations body for assessing the science related to climate change, as well as other parameters of interest such as those related to safeguards and the implementation of eligible activities.

The reporting has to follow IPCC guidance, in particular the "Good Practice Guidance for Land Use, Land-Use Change and Forestry". This provides reporting templates to be included in the National Communications of Parties to the UNFCCC. Also included in the guidance are standard measurement protocols and analysis procedures that greatly affect the measurement systems that countries need to establish. The actual reporting of REDD+ results goes through the Biennial Update Reports (BURs) instead of the National Communications of Parties.

The technical assessment of these results is an independent, external process managed by the Secretariat of the UNFCCC; countries need to facilitate the requirements of this assessment. The technical assessment is included within the broader process of International Consultation and Analysis (ICA), which is effectively a peer review by a team composed of an expert from an Annex I Party and an expert from a non-Annex I Party that "will be conducted in a manner that is nonintrusive, non-punitive and respectful of national sovereignty". This "technical team of experts shall analyse the extent to which:
(a) There is consistency in methodologies, definitions, comprehensiveness and the information provided between the assessed reference level and the results of the implementation of the [REDD+] activities (...);
(b) The data and information provided in the technical annex is transparent, consistent, complete and accurate;
(c) The data and information provided in the technical annex is consistent with the [UNFCCC] guidelines (...);
(d) The results are accurate, to the extent possible."

However, unlike a true verification, the technical assessment cannot "approve" or "reject" the reference level, or the reported results measured against this reference level; it does provide clarity on potential areas for improvement. Financing entities that seek to provide results-based payments (payments per tonne of mitigation achieved) typically seek a true verification of results by external experts, to provide assurance that the results for which they are paying are credible.
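To give a sense of what the ground-based side of MRV involves, the sketch below converts plot measurements into a CO2e estimate using the commonly applied chain of biomass, carbon fraction and the molecular-weight ratio of CO2 to carbon. The allometric coefficients and plot data are placeholders; real inventories use species- and region-specific models and cover more carbon pools:

```python
# Hypothetical above-ground biomass estimate for one inventory plot.
# Coefficients a and b are placeholders, not a real allometric model;
# national inventories fit species- and region-specific equations.
def biomass_kg(dbh_cm: float, a: float = 0.0673, b: float = 2.5) -> float:
    """Above-ground biomass (kg) from diameter at breast height (cm)."""
    return a * dbh_cm ** b

CARBON_FRACTION = 0.47   # IPCC default: ~47% of dry biomass is carbon
CO2_PER_C = 44.0 / 12.0  # molecular-weight ratio of CO2 to carbon

# Example plot: DBH measurements (cm) from a 0.1-hectare sample plot.
plot_dbh = [12.5, 30.2, 22.8, 45.0, 18.1]
plot_area_ha = 0.1

biomass_t = sum(biomass_kg(d) for d in plot_dbh) / 1000.0
co2e_per_ha = biomass_t * CARBON_FRACTION * CO2_PER_C / plot_area_ha
print(f"~{co2e_per_ha:.1f} t CO2e per hectare (above-ground pool only)")
```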
Safeguards
In response to concerns over the potential for negative consequences resulting from the implementation of REDD+, the UNFCCC established a list of safeguards that countries need to "address and respect" and "promote and support" in order to guarantee the correct and lasting generation of results from the REDD+ mechanism. These safeguards are:
"(a) That actions complement or are consistent with the objectives of national forest programmes and relevant international conventions and agreements;
(b) Transparent and effective national forest governance structures, taking into account national legislation and sovereignty;
(c) Respect for the knowledge and rights of indigenous peoples and members of local communities, by taking into account relevant international obligations, national circumstances and laws, and noting that the United Nations General Assembly has adopted the United Nations Declaration on the Rights of Indigenous Peoples;
(d) The full and effective participation of relevant stakeholders, in particular indigenous peoples and local communities;
(e) That actions are consistent with the conservation of natural forests and biological diversity, ensuring that the actions are not used for the conversion of natural forests, but are instead used to incentivize the protection and conservation of natural forests and their ecosystem services, and to enhance other social and environmental benefits;
(f) Actions to address the risks of reversals;
(g) Actions to reduce displacement of emissions".

Countries have to regularly provide a summary of information on how these safeguards are addressed and respected. This could take the form, for instance, of explaining the legal and regulatory environment with regard to the recognition, inclusion and engagement of Indigenous Peoples, together with information on how these requirements have been implemented.

Decision 12/CP.19 established that the "summary of information" on the safeguards will be provided in the National Communications to the UNFCCC, which for developing country Parties are due once every four years. Additionally, and on a voluntary basis, the summary of information may be posted on the UNFCCC REDD+ web platform.

Additional issues
All pertinent issues that comprise REDD+ are exclusively those that are included in the decisions of the COP, as indicated in the sections above. There is, however, a large variety of concepts and approaches that are labelled (as being part of) REDD+ by their proponents, either as substitutes for UNFCCC decisions or as complements to them. Below follows a, no doubt incomplete, list of such concepts and approaches.

Project-based REDD+, voluntary market REDD+. As the concept of REDD+ was being defined, many organizations began promoting REDD+ projects at the scale of a forest area (e.g. a large concession or National Park), analogous to AR-CDM projects under the Kyoto Protocol, with the reduction of emissions or enhancement of removals vetted by an external organization using a standard established by some party (e.g. CCBA, VCS) and with carbon credits traded on the international voluntary carbon market. Under the UNFCCC, however, REDD+ is defined as national (Decisions 4/CP.15 and 1/CP.16 consistently refer to national strategies and action plans and national monitoring, with subnational coverage allowed as an interim measure only).

Benefit distribution. The UNFCCC decisions on REDD+ are silent on the issue of rewarding countries and participants for their verified net emission reductions or enhanced removals of greenhouse gases. It is not very likely that specific requirements for the subnational distribution of benefits will be adopted, as this will be perceived to be an issue of national sovereignty.
Generic guidance may be provided, using language similar to that of the safeguards, such as "results-based finance has to accrue to local stakeholders", without being specific on the percentages retained for management, the identification of stakeholders, or the type of benefit or means of distribution. Countries may decide to channel any benefits through an existing program on rural development, for instance, provide additional services (e.g. extension, better market access, training, seedlings), or pay local stakeholders directly. Many financial entities do have specific requirements on the design of a system to use funds received, and on reporting on the use of these funds.

FPIC. Free, prior and informed consent is included in the U.N. Declaration on the Rights of Indigenous Peoples. The REDD+ decisions under the UNFCCC do not have this as an explicit requirement; however, the safeguard on respect for the knowledge and rights of indigenous peoples and members of local communities notes "that the United Nations General Assembly has adopted the United Nations Declaration on the Rights of Indigenous Peoples" (UNDRIP). Article 19 of UNDRIP requires that "States shall consult and cooperate in good faith with the indigenous peoples concerned through their own representative institutions in order to obtain their free, prior and informed consent before adopting and implementing legislative or administrative measures that may affect them". This article is interpreted by many organizations engaged in REDD+, for example in the UN-REDD "Guidelines on Free, Prior and Informed Consent", to mean that every, or at least many, communities need to provide their consent before any REDD+ activities can take place.

Leakage refers to detrimental effects outside of the project area attributable to project activities. Leakage is less of an issue when REDD+ is implemented at a national or subnational level, as there can be no domestic leakage once full national coverage is achieved. However, there can still be international leakage if activities are displaced across international borders, or "displacement of emissions" between sectors, such as replacing wood fires with kerosene stoves (AFOLU to energy) or construction with wood by construction with concrete, cement and bricks (AFOLU to industry). Many initiatives require that leakage be taken into account in program design, so that potential leakage of emissions, including across borders, can be minimized.

REDD+ as a climate change mitigation measure
Deforestation and forest degradation account for 17–29% of global greenhouse gas emissions, and their reduction is estimated to be one of the most cost-efficient climate change mitigation strategies. Regeneration of forest on degraded or deforested lands can remove CO2 from the atmosphere through the build-up of biomass, making forest lands a sink of greenhouse gases. The REDD+ mechanism addresses both emission reduction and the enhanced removal of greenhouse gases.

Reducing emissions
Emissions of greenhouse gases from forest land can be reduced by slowing down the rates of deforestation and forest degradation, covered by the REDD+ eligible activities. Another option is some form of reduced-impact logging in commercial logging, under the REDD+ eligible activity of sustainable management of forests.
Enhancing removals
Removals of greenhouse gases (specifically CO2) from the atmosphere can be achieved through various forest management options, such as replanting degraded or deforested areas or enrichment planting, but also by letting forest land regenerate naturally. Care must be taken to differentiate between what is a purely ecological process of regrowth and what is induced or enhanced through some management intervention.

REDD+ and the carbon market
In 2009, at COP 15 in Copenhagen, the Copenhagen Accord was reached, noting in section 6 the recognition of the crucial role of REDD and REDD+ and the need to provide positive incentives for such actions by enabling the mobilization of financial resources from developed countries. The Accord goes on to note in section 8 that the collective commitment by developed countries for new and additional resources, including forestry and investments through international institutions, will approach US$30 billion for the period 2010–2012.

The Green Climate Fund (GCF) was established at COP 17 to function as the financial mechanism for the UNFCCC, thereby including REDD+ finance. The Warsaw Framework on REDD-plus makes various references to the GCF, instructing developing country Parties to apply to the GCF for results-based finance. The GCF currently finances REDD+ programs in phase 1 (design of national strategies or action plans, capacity building) and phase 2 (implementation of national strategies or action plans, demonstration programs), and it is finalizing an approach to REDD+ results-based payments. REDD+ is also eligible for inclusion under CORSIA, the International Civil Aviation Organization (ICAO)'s market-based greenhouse gas offset mechanism.

Implementing REDD+
Decision 1/CP.16, paragraph 73, suggests that national capacity for implementing REDD+ is built up in phases, "beginning with the development of national strategies or action plans, policies and measures, and capacity-building, followed by the implementation of national policies and measures and national strategies or action plans that could involve further capacity-building, technology development and transfer and results-based demonstration activities, and evolving into results-based actions that should be fully measured, reported and verified". The initial phase of the development of national strategies and action plans and of capacity building is typically referred to as the "Readiness phase" (a term like "Reddiness" is also encountered). There is a very substantial number of REDD+ projects globally, and this section lists only a selection. One of the more comprehensive online tools with up-to-date information on REDD+ projects is the Voluntary REDD+ Database.

Readiness activities
Most REDD+ activities or projects implemented since the call for demonstration activities in Decision 2/CP.13 of December 2007 have focused on readiness, which is not surprising given that REDD+ and its requirements were completely new to all developing countries.

UN-REDD Programme
UNDP, UNEP and FAO jointly established the UN-REDD Programme in 2007, a partnership aimed at assisting developing countries in addressing certain measures needed in order to participate effectively in the REDD+ mechanism. These measures include capacity development, governance, the engagement of Indigenous Peoples and technical needs. The initial set of supported countries comprised Bolivia, the Democratic Republic of the Congo, Indonesia, Panama, Papua New Guinea, Paraguay, Tanzania, Vietnam, and Zambia.
By March 2014 the Programme counted 49 participants, 18 of which were receiving financial support to kick-start or complement a variety of national REDD+ readiness activities. The other 31 partner countries may receive targeted support and knowledge sharing, be invited to attend meetings and training workshops, have observer status at Policy Board meetings, and "may be invited to submit a request to receive funding for a National Programme in the future, if selected through a set of criteria to prioritize funding for new countries approved by the Policy Board". The Programme operates in six work areas:
MRV and Monitoring (led by FAO)
National REDD+ Governance (UNDP)
Engagement of Indigenous Peoples, Local Communities and Other Relevant Stakeholders (UNDP)
Ensuring multiple benefits of forests and REDD+ (UNEP)
Transparent, Equitable and Accountable Management of REDD+ Payments (UNDP)
REDD+ as a Catalyst for Transformations to a Green Economy (UNEP)

Forest Carbon Partnership Facility
The World Bank has played an important role in the development of REDD+ activities since their inception. The Forest Carbon Partnership Facility (FCPF) was presented to the international community at COP 13 in Bali, December 2007. Recipient countries can apply for $3.6 million towards: the development of national strategies; stakeholder consultation; capacity building; the development of reference levels; the development of a national forest monitoring system; and social and environmental safeguards analysis. Countries that successfully achieve a state of readiness can apply to the related Carbon Fund for support towards the national implementation of REDD+.

Norwegian International Climate and Forest Initiative
At the 2007 Bali Conference, the Norwegian government announced its International Climate and Forests Initiative (NICFI), which provided US$1 billion towards the Brazilian REDD scheme and US$500 million towards the creation and implementation of nationally based REDD+ activities in Tanzania. In addition, together with the United Kingdom, US$200 million was contributed towards the Congo Basin Forest Fund to aid forest conservation activities in Central Africa. In 2010, Norway signed a Letter of Intent with Indonesia to provide the latter country with up to US$1 billion "assuming that Indonesia achieves good results".

United States
The United States has provided more than $1.5 billion in support of REDD+ and other sustainable landscape activities since 2010. It supports several multilateral partnerships, including the FCPF, as well as flagship global programs such as SilvaCarbon, which supports REDD+ countries in measuring and monitoring forests and forest-related emissions. The United States also provides significant regional and bilateral support to numerous countries implementing REDD+.

ITTO
The International Tropical Timber Organization (ITTO) has launched a thematic program on REDD+ and environmental services with initial funding of US$3.5 million from Norway. In addition, the 45th session of the ITTO Council, held in November 2009, recommended that efforts relating to REDD+ should focus on promoting "sustainable forest management".

Finland
In 2009, the Government of Finland and the Food and Agriculture Organization of the United Nations signed a US$17 million partnership agreement to provide tools and methods for multi-purpose forest inventories, REDD+ monitoring and climate change adaptation in five pilot countries: Ecuador, Peru, Tanzania, Viet Nam and Zambia.
As part of this programme, the Government of Tanzania will soon complete the country's first comprehensive forest inventory to assess its forest resources, including the size of the carbon stock stored within its forests. A forest soil carbon monitoring program to estimate soil carbon stock, using both survey-based and modelling-based methods, has also been undertaken.

Australia
Australia established an A$200 million International Forest Carbon Initiative, focused on developing REDD+ activities in its vicinity, in countries such as Indonesia and Papua New Guinea.

Interim REDD+ Partnership
In 2010, the national governments of developing and developed countries joined efforts to create the Interim REDD+ Partnership as a means to enhance the implementation of early action and foster fast-start finance for REDD+ actions.

Implementation phase
Some countries are already implementing aspects of a national forest monitoring system and activities aimed at reducing emissions and enhancing removals that go beyond REDD+ readiness. For example, the Forest Carbon Partnership Facility has 19 countries in the pipeline of the Carbon Fund, which will provide payments to these countries based on verified REDD+ emission reductions achieved under national or subnational programs.

Results-based actions
Following the Warsaw Framework on REDD-plus, Brazil became the first country to submit a Biennial Update Report with a Technical Annex containing the details of emission reductions from REDD+ eligible activities, on 31 December 2014. The Technical Annex covers the Amazon biome within Brazil's territory, a little under half of the national territory, reporting emission reductions from reduced deforestation against Brazil's previously submitted reference emission level of 2,971.02 MtCO2e. This Technical Annex was reviewed through the International Consultation and Analysis process, and on 22 September 2015 a technical report was issued by the UNFCCC which states that "the LULUCF experts consider that the data and information provided in the technical annex are transparent, consistent, complete and accurate" (paragraph 38). The technical report also identified areas for future improvement:
(a) Continuation in updating and improving the carbon density map, including through the use of improved ground data from Brazil's first national forest inventory, possibly prioritizing geographic areas where deforestation is more likely to occur;
(b) Expansion of the coverage of carbon pools, including improving the understanding of soil carbon dynamics after the conversion of forests to non-forests;
(c) Consideration of the treatment of non-CO2 gases to maintain consistency with the GHG inventory;
(d) Continuation of the improvements related to the monitoring of forest degradation;
(e) Expansion of the forest monitoring system to cover additional biomes.

Criticisms
Since the first discussion of REDD+ in 2005, and particularly at COP 13 in 2007 and COP 15 in 2009, many concerns have been voiced about aspects of REDD+. Though it is widely understood that REDD+ will need to undergo full-scale implementation in all non-Annex I countries to meet the objectives of the Paris Agreement, many challenges need resolving before this can happen.

One of the largest issues is how the reduction of emissions and the removal of greenhouse gases will be monitored consistently on a large scale, across a number of countries, each with separate environmental agencies and laws.
Other issues relate to the conflict between the REDD+ approach and existing national development strategies, the participation of forest communities and indigenous peoples in the design and maintenance of REDD+, funding for the countries implementing REDD+, and the consistent monitoring of forest resources to verify the permanence of the forest resources that countries have reported under the REDD+ mechanism.

Natural forests vs. high-density plantations
Safeguard (e): That actions are consistent with the conservation of natural forests and biological diversity, ensuring that the [REDD+] actions … are not used for the conversion of natural forests, but are instead used to incentivize the protection and conservation of natural forests and their ecosystem services, and to enhance other social and environmental benefits. Footnote to this safeguard: Taking into account the need for sustainable livelihoods of indigenous peoples and local communities and their interdependence on forests in most countries, reflected in the United Nations Declaration on the Rights of Indigenous Peoples, as well as the International Mother Earth Day.

The UNFCCC does not define what constitutes a forest; it only requires that Parties communicate to the UNFCCC how they define a forest. The UNFCCC does suggest using a definition in terms of the minimal area, minimal crown coverage and minimal height at maturity of perennial vegetation.

While there is a safeguard against the conversion of natural forest, developing country Parties are free to include plantations of commercial tree species (including exotics like Eucalyptus spp., Pinus spp., Acacia spp.), agricultural tree crops (e.g. rubber, mango, cocoa, citrus), or even non-tree species such as palms (oil palm, coconut, dates) and bamboo (a grass). Some opponents of REDD+ argue that this lack of a clear distinction is no accident. FAO forest definitions date from 1948 and define forest only by the number, height, and canopy cover of trees in an area. Similarly, there is no consistent definition of forest degradation.

A national REDD+ strategy need not refer solely to the establishment of national parks or protected areas; through the careful design of rules and guidelines, REDD+ could include land use practices such as shifting cultivation by indigenous communities and reduced-impact logging, provided sustainable rotation and harvesting cycles can be demonstrated. Some argue that this opens the door to logging operations in primary forests, the displacement of local populations for "conservation", and an increase in tree plantations. Achieving multiple benefits, for example the conservation of biodiversity and ecosystem services (such as drainage basins) and social benefits (such as income and improved forest governance), is currently not addressed beyond the inclusion in the safeguard.

Land tenure, carbon rights and benefit distribution
According to some critics, REDD+ is another extension of green capitalism, subjecting the forests and their inhabitants to new ways of expropriation and enclosure at the hands of polluting companies and market speculators. So-called "carbon cowboys", unscrupulous entrepreneurs who attempt to acquire rights to carbon in rainforest for small-scale projects, have signed on indigenous communities to unfair contracts, often with a view to on-selling the rights to investors for a quick profit.
In 2012 an Australian businessman operating in Peru was revealed to have signed 200-year contracts with an Amazon tribe, the Yagua, many members of which are illiterate, giving him a 50 per cent share in their carbon resources. The contracts allow him to establish and control timber projects and palm oil plantations in Yagua rainforest. This risk is largely negated by the focus on national and subnational REDD+ programs, and by government ownership of these initiatives.

There are risks that the local inhabitants and the communities that live in the forests will be bypassed, that they will not be consulted, and that they will not actually receive any revenues. Fair distribution of REDD+ benefits will not be achieved without prior reform of forest governance and more secure tenure systems in many countries.

The UNFCCC has repeatedly called for the full and effective participation of Indigenous Peoples and local communities, without becoming any more specific. The ability of local communities to contribute effectively to REDD+ field activities and to the measurement of forest properties for estimating reduced emissions and enhanced removals of greenhouse gases has been clearly demonstrated in various countries. In some project-based REDD+, disreputable companies have taken advantage of weak governance.

Indigenous peoples
Safeguard (c): Respect for the knowledge and rights of indigenous peoples and members of local communities, by taking into account relevant international obligations, national circumstances and laws, and noting that the United Nations General Assembly has adopted the United Nations Declaration on the Rights of Indigenous Peoples;
Safeguard (d): The full and effective participation of relevant stakeholders, in particular indigenous peoples and local communities, in the [REDD+] actions … [and when developing and implementing national strategies or action plans];

Indigenous peoples are important stakeholders in REDD+, as they typically live inside forest areas or base their livelihoods (partially) on the exploitation of forest resources. The International Indigenous Peoples Forum on Climate Change (IIPFCC) was explicit at the Bali climate negotiations in 2007:

REDD/REDD+ will not benefit Indigenous Peoples, but in fact will result in more violations of Indigenous Peoples' rights. It will increase the violation of our human rights, our rights to our lands, territories and resources, steal our land, cause forced evictions, prevent access and threaten indigenous agricultural practices, destroy biodiversity and cultural diversity and cause social conflicts. Under REDD/REDD+, states and carbon traders will take more control over our forests.

Some claim that putting a commercial value on forests neglects the spiritual value they hold for Indigenous Peoples and local communities. Indigenous Peoples protested in 2008 against the United Nations Permanent Forum on Indigenous Issues final report on climate change and a paragraph that endorsed REDD+; this was captured in a video entitled "the 2nd May Revolt". However, these protests have largely disappeared in recent years, and indigenous people sit as permanent representatives on many multinational and national REDD+ bodies. Indigenous Peoples' groups in Panama broke off their collaboration with the national UN-REDD Programme in 2012 over allegations of a failure of the government to properly respect the rights of the indigenous groups.
Some grassroots organizations are working to develop REDD+ activities with communities and to develop benefit-sharing mechanisms to ensure REDD+ funds reach rural communities as well as governments. Examples include Plan Vivo projects in Mexico, Mozambique and Cameroon, and Carbonfund.org Foundation's VCS and CCBS projects in the state of Acre, Brazil.

REDD+ in the carbon market
When REDD+ was first discussed by the UNFCCC, no indication was given of the positive incentives that would support developing countries in their efforts to implement REDD+ to reduce emissions and enhance removals of greenhouse gases from forests. In the absence of guidance from the COP, two options were debated by the international community at large: a market-based approach, and a fund-based approach under which Annex I countries would deposit substantial amounts of money into a fund administered by some multilateral entity.
Under the market-based approach, REDD+ would act as an "offset scheme" in which verified results-based actions translate into some form of carbon credits, more or less analogous to the market for Certified Emission Reductions (CER) under the CDM of the Kyoto Protocol. Such carbon credits could then offset emissions in the country or company of the buyer of the carbon credits. This would require Annex I countries to agree to deeper cuts in emissions of greenhouse gases in order to create a market for the carbon credits from REDD+, which is unlikely to happen soon given the current state of negotiations in the COP; even then, there is the fear that the market would be flooded with carbon credits, depressing the price to levels at which REDD+ is no longer economically viable. Some developing countries, such as Brazil and China, maintain that developed countries must commit to real emissions reductions, independent of any offset mechanism.
Since COP 17, however, it has become clear that REDD+ may be financed by a variety of sources, market and non-market. The newly established Green Climate Fund is already supporting phase 1 and 2 REDD+ programs, and is finalizing rules to allow disbursement of results-based finance to developing countries that submit verified reports of emission reductions and enhanced removals of greenhouse gases.

Top-down design by large international institutions vs. bottom-up grassroots coalitions
While the COP decisions emphasize national ownership and stakeholder consultation, there are concerns that some of the larger institutional organizations are driving the process, in particular outside the one-Party, one-vote realm of multilateral negotiations under the UNFCCC. For example, the World Bank and the UN-REDD Programme, the two largest sources of funding and technical assistance for readiness activities and therefore unavoidable for most developing countries, place requirements upon recipient countries that are arguably not mandated or required by the COP decisions. A body of research suggests that, at least as of 2016, REDD+ as a global architecture has had only a limited effect on local political realities, as pre-existing entrenched power dynamics and incentives that promote deforestation are not easily changed by the relatively small sums of money that REDD+ has delivered to date.
In addition, issues like land tenure that fundamentally determine who makes decisions about land use and deforestation have not been adequately addressed by REDD+, and there is no clear consensus on how complex political issues like land tenure can be resolved to favor standing forests over cleared forests through a relatively top-down mechanism like REDD+. While a single, harmonized, global system that accounts for and rewards emissions reductions from forests and land use has been elusive, diverse context-specific projects have emerged that support a variety of activities, including community-based forest management, enforcement of protected areas, sustainable charcoal production, and agroforestry. Although it is not clear whether these diverse projects are genuinely different from older integrated conservation and development initiatives that pre-date REDD+, there is evidence that REDD+ has altered global policy conversations, possibly elevating issues like indigenous peoples' land rights to higher levels, or conversely threatening to bypass safeguards for indigenous rights. Debate surrounding these issues is ongoing.
Although the World Bank declares its commitment to fight climate change, many civil society organisations and grassroots movements around the world view with scepticism the processes being developed under the various carbon funds. Among the most worrying reasons are the weak (or nonexistent) consultation processes with local communities; the lack of criteria to determine when a country is ready to implement REDD+ projects (readiness); negative impacts such as deforestation and loss of biodiversity (due to hasty agreements and lack of planning); the lack of safeguards to protect Indigenous Peoples' rights; and the lack of regional policies to stop deforestation. A growing coalition of civil society organizations, social movements, and other actors critical of REDD+ emerged between 2008 and 2011, criticizing the mechanism on climate justice grounds. During the UN climate negotiations in Copenhagen (2009) and Cancun (2010), civil society and social movement coalitions formed a united front to push the World Bank out of climate finance. However, this concern has largely died down as the World Bank initiatives have been more fully developed, and some of these same actors are now participating in the implementation of REDD+. ITTO has been criticized for appearing to support above all the inclusion of forest extraction within REDD+ under the guise of "sustainable management", in order to benefit from carbon markets while maintaining business as usual.

The UN-REDD Programme
The United Nations Programme on Reducing Emissions from Deforestation and Forest Degradation (or UN-REDD Programme) is a multilateral body that partners with countries to help them establish the technical capacities to implement REDD+ (see below #Difference between REDD+ and the UN-REDD Programme). The overall development goal of the Programme is "to reduce forest emissions and enhance carbon stocks in forests while contributing to national sustainable development".
The UN-REDD Programme supports nationally led REDD+ processes and promotes the informed and meaningful involvement of all stakeholders, including indigenous peoples and other forest-dependent communities, in national and international REDD+ implementation. The programme is a collaboration between FAO, UNDP and UNEP, under which a trust fund established in July 2008 allows donors to pool resources to generate the transfer of resources required to significantly reduce global emissions from deforestation and forest degradation. The Programme has expanded steadily since its establishment and now has over 60 official Partner Countries spanning Africa, Asia-Pacific and Latin America-Caribbean.
In addition to the UN-REDD Programme, other initiatives assisting countries engaged in REDD+ include the World Bank's Forest Carbon Partnership Facility, Norway's International Climate and Forest Initiative, the Global Environment Facility, Australia's International Forest Carbon Initiative, the Collaborative Partnership on Forests, and the Green Climate Fund. The UN-REDD Programme publicly releases an Annual Programme Progress Report and a Semi-Annual Report each year.

Support to Partner Countries
The UN-REDD Programme supports its Partner Countries through: direct funding and technical support for the design and implementation of National REDD+ Programmes; complementary tailored funding and technical support for national REDD+ actions; and technical capacity-building support through the sharing of expertise, common approaches, analyses, methodologies, tools, data, best practices and facilitated South-South knowledge sharing.

Governance
The UN-REDD Programme is a collaborative programme of the Food and Agriculture Organization of the United Nations (FAO), the United Nations Development Programme (UNDP) and the United Nations Environment Programme (UNEP), created in 2008 in response to the UNFCCC decisions on the Bali Action Plan and REDD at COP 13. The UN-REDD Programme's 2016-2020 governance arrangements allow for the full and effective participation of all UN-REDD Programme stakeholders – partner countries, donors, indigenous peoples, civil society organizations, participating UN agencies – while ensuring streamlined decision-making processes and clear lines of accountability. The governance arrangements are built on and informed by five principles: inclusiveness, transparency, accountability, consensus-based decisions and participation. The UN-REDD Programme's 2016-2020 governance arrangements include:

Executive Board
The UN-REDD Programme Executive Board has general oversight of the Programme, taking decisions on the allocation of UN-REDD Programme fund resources. It meets twice a year, or more frequently as required to efficiently carry out its roles and responsibilities.

Assembly
The UN-REDD Programme Assembly is a broad multi-stakeholder forum whose role is to foster consultation, dialogue and knowledge exchange among UN-REDD Programme stakeholders.

National Steering Committees
National Steering Committees facilitate strong country ownership and shared decision-making for National REDD+ Programmes, and include representatives of civil society and indigenous peoples.
Each National Steering Committee provides oversight for National Programmes, addressing any delays, changes or reorientation of a programme and ensuring alignment with, and delivery of, results as expected and approved by the Executive Board.

Multi-Party Trust Fund Office
The Multi-Party Trust Fund Office provides real-time funding administration for the UN-REDD Programme.

2016-2020 Strategic Framework
The work of the UN-REDD Programme is guided by its 2016-2020 Strategic Framework, with the goal to reduce forest emissions and enhance carbon stocks in forests while contributing to national sustainable development. In order to realize its goal and target impacts, the Programme set three outcomes and supporting outputs for its 2016-2020 work programme: contributions of REDD+ to the mitigation of climate change, as well as to the provision of additional benefits, have been designed; country contributions to the mitigation of climate change through REDD+ are measured, reported and verified, and the necessary institutional arrangements are in place; and REDD+ contributions to the mitigation of climate change are implemented and safeguarded with policies and measures that constitute results-based actions, including the development of appropriate and effective institutional arrangements. Additionally, the Programme identified four cross-cutting themes as particularly significant for ensuring that the outcomes and outputs of the Programme achieve the desired results: stakeholder engagement, forest governance, tenure security and gender equality.

Donors
The UN-REDD Programme depends entirely on voluntary funds. Donors to the UN-REDD Programme have included the European Commission and the governments of Denmark, Japan, Luxembourg, Norway, Spain and Switzerland, with Norway providing a significant portion of the funds.

Transparency
The UN-REDD Programme adheres to the belief that information is fundamental to the effective participation of all stakeholders, including the public, in the advancement of REDD+ efforts around the world. Information sharing promotes transparency and accountability and enables public participation in REDD+ activities. The collaborating UN agencies of the UN-REDD Programme – FAO, UNEP and UNDP – are committed to making information about the Programme and its operations available to the public in the interest of transparency. As part of this commitment, the Programme publishes annual and semi-annual programme progress reports and provides online public access to real-time funding administration.

Difference between REDD+ and the UN-REDD Programme
REDD+ is a voluntary climate change mitigation approach developed by Parties to the UNFCCC. It aims to incentivize developing countries to reduce emissions from deforestation and forest degradation, conserve forest carbon stocks, sustainably manage forests and enhance forest carbon stocks. The United Nations Collaborative Programme on Reducing Emissions from Deforestation and Forest Degradation in Developing Countries – or UN-REDD Programme – is a multilateral body. It partners with developing countries to support them in establishing the technical capacities needed to implement REDD+ and to meet UNFCCC requirements for REDD+ results-based payments. It does so through a country-based approach that provides advisory and technical support services tailored to national circumstances and needs.
The UN-REDD Programme is a collaborative programme of the Food and Agriculture Organization of the United Nations (FAO), the United Nations Development Programme (UNDP) and the United Nations Environment Programme (UNEP), and harnesses the technical expertise of these UN agencies. Other examples of REDD+ multilaterals include the Forest Carbon Partnership Facility and the Forest Investment Program, hosted by the World Bank.

History
Terminology
The approach detailed under the UNFCCC is commonly referred to as "reducing emissions from deforestation and forest degradation", abbreviated as REDD+. This title and acronym, however, are not used by the COP itself. The original submission by Papua New Guinea and Costa Rica, on behalf of the Coalition for Rainforest Nations, dated 28 July 2005, was entitled "Reducing Emissions from Deforestation in Developing Countries: Approaches to Stimulate Action". COP 11 entered the request to consider the document as agenda item 6: "Reducing emissions from deforestation in developing countries: approaches to stimulate action", again written here exactly as in the official text. The name for the agenda item was also used at COP 13 in Bali, December 2007. By COP 15 in Copenhagen, December 2009, the scope of the agenda item had broadened to "Methodological guidance for activities relating to reducing emissions from deforestation and forest degradation and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries", moving to "Policy approaches and positive incentives on issues relating to reducing emissions from deforestation and forest degradation in developing countries; and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries" by COP 16. At COP 17 the title of the decision simply referred back to an earlier decision: "Guidance on systems for providing information on how safeguards are addressed and respected and modalities relating to forest reference emission levels and forest reference levels as referred to in decision 1/CP.16". At COP 19 the titles of decisions 9 and 12 refer back to decision 1/CP.16, paragraph 70 and appendix I respectively, while the other decisions only mention the topic under consideration. None of these decisions use an acronym for the title of the agenda item; the acronym was not coined by the COP of the UNFCCC. The set of decisions on REDD+ adopted at COP 19 in Warsaw, December 2013, was coined the Warsaw Framework on REDD-plus in a footnote to the title of each of the decisions, creating the acronyms: REDD originally referred to "reducing emissions from deforestation in developing countries", the title of the original document on REDD; it was superseded in the negotiations by REDD+. REDD+ (or REDD-plus) refers to "reducing emissions from deforestation and forest degradation in developing countries, and the role of conservation, sustainable management of forests, and enhancement of forest carbon stocks in developing countries" (emphasis added), the most recent, elaborated terminology used by the COP. Most of the key REDD+ decisions were completed by 2013, with the final pieces of the rulebook finished in 2015.
REDD
REDD was first discussed in 2005 by the UNFCCC at the 11th session of the Conference of the Parties to the convention (COP), at the request of Costa Rica and Papua New Guinea, on behalf of the Coalition for Rainforest Nations, when they submitted the document "Reducing Emissions from Deforestation in Developing Countries: Approaches to Stimulate Action", with a request to create an agenda item to discuss consideration of reducing emissions from deforestation and forest degradation in natural forests as a mitigation measure. COP 11 entered the request to consider the document as agenda item 6: Reducing emissions from deforestation in developing countries: approaches to stimulate action.
In December 2007, after a two-year debate on the proposal from Papua New Guinea and Costa Rica, state parties to the United Nations Framework Convention on Climate Change (UNFCCC) agreed to explore ways of reducing emissions from deforestation and enhancing forest carbon stocks in developing nations. The underlying idea is that developing nations should be financially compensated if they succeed in reducing their levels of deforestation (through valuing the carbon that is stored in forests); a concept termed "avoided deforestation" (AD), or REDD if broadened to include reducing forest degradation. Under the free market model advocated by the countries that formed the Coalition for Rainforest Nations, developing nations with rainforests would sell carbon sink credits under a free market system to Kyoto Protocol Annex I states that have exceeded their emissions allowance.: 434  Brazil (the state with the largest area of tropical rainforest), however, opposes including avoided deforestation in a carbon trading mechanism and instead favors the creation of a multilateral development assistance fund financed by donations from developed states.: 434  For REDD to be successful, science and regulatory infrastructure related to forests will need to improve so that nations can inventory all their forest carbon, show that they can control land use at the local level, and prove that their emissions are declining.

REDD+
Subsequent to the initial donor nation response, the UN established REDD Plus, or REDD+, expanding the original program's scope to include increasing forest cover through both reforestation and the planting of new forest cover, as well as promoting sustainable forest resource management.

Bali Action Plan
REDD received substantial attention from the UNFCCC – and the attending community – at COP 13, December 2007, where the first substantial decision on REDD+ was adopted, Decision 2/CP.13: "Reducing emissions from deforestation in developing countries: approaches to stimulate action", calling for demonstration activities to be reported upon two years later and for an assessment of the drivers of deforestation. REDD+ was also referenced in decision 1/CP.13, the "Bali Action Plan", with reference to all five eligible activities for REDD+ (with sustainable management of forests, conservation of forest carbon stocks and enhancement of forest carbon stocks constituting the "+" in REDD+). The call for demonstration activities in decision 2/CP.13 led to a very large number of programs and projects, including the Forest Carbon Partnership Facility (FCPF) of the World Bank, the UN-REDD Programme, and a number of smaller projects financed by the Norwegian International Climate and Forest Initiative (NICFI), the United States, the United Kingdom, and Germany, among many others.
All of these were based on substantive guidance from the UNFCCC.

Definition of main elements
In 2009 at COP 15, decision 4/CP.15, "Methodological guidance for activities relating to reducing emissions from deforestation and forest degradation and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries", provided more substantive information on requirements for REDD+. Specifically, the national forest monitoring system was introduced, with elements of measurement, reporting and verification (MRV). Countries were encouraged to develop national strategies, develop domestic capacity, establish reference levels, and establish a participatory approach with "full and effective engagement of indigenous peoples and local communities in (…) monitoring and reporting".
A year later, at COP 16, decision 1/CP.16 was adopted. In section C, "Policy approaches and positive incentives on issues relating to reducing emissions from deforestation and forest degradation in developing countries; and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries", environmental and social safeguards were introduced, with a reiteration of requirements for the national forest monitoring system. These safeguards were introduced to ensure that implementation of REDD+ at the national level would not lead to detrimental effects on the environment or the local population. Countries are required to provide summaries of information on how these safeguards are implemented throughout the three "phases" of REDD+. In 2011, decision 12/CP.17 was adopted at COP 17: "Guidance on systems for providing information on how safeguards are addressed and respected and modalities relating to forest reference emission levels and forest reference levels as referred to in decision 1/CP.16". It provides details on the preparation and submission of reference levels and guidance on providing information on safeguards.

Warsaw Framework on REDD-plus
In December 2013, COP 19 produced no fewer than seven decisions on REDD+, jointly known as the "Warsaw Framework on REDD-plus". These decisions address a work program on results-based finance; coordination of support for implementation; modalities for national forest monitoring systems; presenting information on safeguards; technical assessment of reference (emission) levels; modalities for measuring, reporting and verifying (MRV); and information on addressing the drivers of deforestation and forest degradation. Requirements for eligibility for access to "results-based finance" have been specified: submission of reports whose contents have been specified, and technical assessment through International Consultation and Analysis (ICA), for which procedures have been specified. With these decisions the overall framework for REDD+ implementation was completed, although many details still needed to be provided. COP 20 in December 2014 did not produce any new decisions on REDD+. A reference was made to REDD+ in decision 8/CP.20, "Report of the Green Climate Fund to the Conference of the Parties and guidance to the Green Climate Fund", where in paragraph 18 the COP "requests the Board of the Green Climate Fund (...) (b) to consider decisions relevant to REDD-plus", referring back to earlier COP decisions on REDD+. The remaining outstanding decisions on REDD+ were completed at COP 21 in 2015.
With the conclusion of decisions on reporting on the safeguards, non-market approaches, and non-carbon benefits, the UNFCCC rulebook on REDD+ was completed. All countries were also encouraged to implement and support REDD+ in Article 5 of the Paris Agreement, part of a broader Article specifying that all countries should take action to protect and enhance their greenhouse gas sinks and reservoirs (stores of sequestered carbon).

See also
Deforestation and climate change
Deforestation by region
Emissions trading
Illegal logging
CDM excluding Forest Conservation
Natural Forest Standard
Tree credits
Tree planting
United Nations Forum on Forests

References

Further reading
Goetz, S.; Hansen, M.; Houghton, R.; Walker, W.; Laporte, N.; Busch, J. (2015). "Measurement and monitoring needs, capabilities and potential for addressing reduced emissions from deforestation and forest degradation under REDD+". Environmental Research Letters. 10 (12): 123001. doi:10.1088/1748-9326/10/12/123001.
Lu, H.; Wang, X.; Zhang, Y.; Yan, W.; Zhang, J. (2012). "Modelling Forest Fragmentation and Carbon Emissions for REDD plus". Procedia Engineering. 37: 333–338. doi:10.1016/j.proeng.2012.04.249.
Probert, C.; Sharrock, S.; Ali, N. (2011). A REDD+ manual for botanic gardens (PDF). Botanic Gardens Conservation International (BGCI).
Pan, Y.; Birdsey, R.; Fang, J.; Houghton, R.; Kauppi, P.; Kurz, W.; Phillips, O.; Shvidenko, A.; Canadell, J.; Ciais, P.; Jackson, R.; Lewis, S.; McGuire, D.; Pacala, S.; Piao, S.; Rautiainen, A.; Sitch, S.; Hayes, D. (2011). "A large and persistent carbon sink in the world's forests". Science. 333 (6045): 988–993. doi:10.1126/science.1201609. PMID 21764754.
Entenmann, Steffen Karl; Schmitt, Christine Brigitte (2013). "Actors' perceptions of forest biodiversity values and policy issues related to REDD plus implementation in Peru". Biodiversity and Conservation. 22 (5): 1229–1254. doi:10.1007/s10531-013-0477-5.
Asiyanbi, A.; Lund, J. F. (2020). "Policy persistence: REDD+ between stabilization and contestation". Journal of Political Ecology. 27 (1): 378–400.

External links
Official UN-REDD Programme Website
Official UN-REDD Programme Online Collaborative Workspace
Official UNFCCC Website
UN-REDD Programme Multi-Partner Trust Fund Factsheet
UNFCCC REDD Web Platform
REDD+ Partnership, including financing database
Forest Carbon Partnership Facility, hosted by the World Bank
REDD+ profile on database of Market Governance Mechanisms
UN-REDD Programme
Code REDD: A campaign to promote REDD+ projects and the corporations who have pledged support
REDD-Monitor – Critical analysis and news about REDD
Partners: Forest Carbon Partnership Facility (FCPF), Global Environment Facility (GEF), Forest Investment Program (FIP)
fugitive gas emissions
Fugitive gas emissions are emissions of gas (typically natural gas, which contains methane) to the atmosphere or groundwater that result from oil and gas or coal mining activity. In 2016, these emissions, when converted to their equivalent impact of carbon dioxide, accounted for 5.8% of all global greenhouse gas emissions.
Most fugitive emissions are the result of loss of well integrity through poorly sealed well casings due to geochemically unstable cement. This allows gas to escape through the well itself (known as surface casing vent flow) or via lateral migration along adjacent geological formations (known as gas migration). Approximately 1-3% of methane leakage cases in unconventional oil and gas wells are caused by imperfect seals and deteriorating cement in wellbores. Some leaks are also the result of leaks in equipment, intentional pressure release practices, or accidental releases during normal transportation, storage, and distribution activities.
Emissions can be measured using either ground-based or airborne techniques. In Canada, the oil and gas industry is thought to be the largest source of greenhouse gas and methane emissions, and approximately 40% of Canada's emissions originate from Alberta. Emissions are largely self-reported by companies. The Alberta Energy Regulator keeps a database on wells releasing fugitive gas emissions in Alberta, and the British Columbia Oil and Gas Commission keeps a database of leaky wells in British Columbia. Testing wells at the time of drilling was not required in British Columbia until 2010, and since then 19% of new wells have reported leakage problems. This number may be an underestimate, as suggested by fieldwork completed by the David Suzuki Foundation; some studies have shown that 6-30% of wells suffer gas leakage.
Canada and Alberta have plans for policies to reduce emissions, which may help combat climate change. Costs related to reducing emissions are very location-dependent and can vary widely. Methane has a greater global warming impact than carbon dioxide: its warming effect is 120, 86 and 34 times that of carbon dioxide over 1-, 20- and 100-year time frames, respectively (including climate-carbon feedbacks). Additionally, methane leads to increases in carbon dioxide concentration, because its oxidation in the atmosphere ultimately produces carbon dioxide and water vapor.

Sources of emissions
Fugitive gas emissions can arise as a result of operations in hydrocarbon exploration, such as for natural gas or petroleum. Often, sources of methane are also sources of ethane, allowing methane emissions to be derived from ethane emissions and ethane/methane ratios in the atmosphere. This method has yielded an estimate of methane emissions increasing from 20 Tg per year in 2008 to 35 Tg per year in 2014. A large portion of methane emissions can be attributed to only a few "super-emitters". The annual ethane emission increase rate in North America between 2009 and 2014 was 3-5%. It has been suggested that 62% of atmospheric ethane originates from leaks associated with natural gas production and transportation operations. It has also been suggested that ethane emissions measured in Europe are affected by hydraulic fracturing and shale gas production operations in North America. Some researchers postulate that leakage problems are more likely to occur in unconventional wells, which are hydraulically fractured, than in conventional wells. Approximately 40% of methane emissions in Canada occur within Alberta, according to the National Inventory Report.
Of the anthropogenic methane emissions in Alberta, 71% are generated by the oil and gas sector. It is estimated that 5% of the wells in Alberta are associated with natural gas leaking or venting. It is also estimated that 11% of all wells drilled in British Columbia, or 2,739 of 24,599 wells, have reported leakage problems. Some studies have estimated that 6-30% of all wells suffer gas leakage.

Well-specific and processing sources
Sources can include broken or leaky well casings (either at abandoned wells or at unused, but not properly abandoned, wells) or lateral migration through geological formations in the subsurface before emission to groundwater or the atmosphere. Broken or leaky well casings are often the result of geochemically unstable or brittle cement. One researcher proposes seven main paths for gas migration and surface casing vent flow: (1) between the cement and the adjacent rock formation, (2) between the casing and the encompassing cement, (3) between the casing and the cement plug, (4) directly through the cement plug, (5) through the cement between the casing and the adjacent rock formation, (6) through the cement between linking cavities from the casing side of the cement to the annulus side of the cement, and (7) through shears in the casing or well bore.
Leakage and migration can be caused by hydraulic fracturing, although in many cases the method of fracturing is such that gas is not able to migrate through the well casing. Some studies observe that hydraulic fracturing of horizontal wells does not affect the likelihood of a well suffering from gas migration. It is estimated that approximately 0.6-7.7% of the methane emissions produced during the lifetime of a fossil fuel well occur during activities that take place either at the well site or during processing.

Pipeline and distribution sources
Distribution of hydrocarbon products can lead to fugitive emissions caused by leaks in the seals of pipes or storage containers, improper storage practices, or transportation accidents. Some leaks may be intentional, as in the case of pressure-release safety valves. Some emissions may originate from unintentional equipment leaks, such as from flanges or valves. It is estimated that approximately 0.07-10% of methane emissions occur during transportation, storage, and distribution activities.

Detection methods
There are several methods used to detect fugitive gas emissions. Often, measurements are taken at or near wellheads (via soil gas samples, eddy covariance towers, or dynamic flux chambers connected to a greenhouse gas analyzer), but it is also possible to measure emissions using an aircraft with specialized instruments on board. An aircraft survey in northeastern British Columbia indicated emissions emanating from approximately 47% of active wells in the area. The same study suggests that actual methane emissions may be much higher than what is being reported by industry or estimated by government. For small-scale measurement projects, infrared camera leak inspections, well injection tracers, and soil gas sampling may be used. These are typically too labour-intensive to be useful to large oil and gas companies, and airborne surveys are often used instead. Other source identification methods used by industry include carbon isotope analysis of gas samples, noise logs of the production casing, and neutron logs of the cased borehole; a simplified flux-chamber calculation is sketched below.
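The flux-chamber approach mentioned above reduces to a simple mass balance: the flux is the rate of concentration rise inside the chamber, converted from a mixing ratio to a mass via the ideal gas law and scaled by chamber volume and footprint area. The following Python sketch illustrates that arithmetic; the chamber dimensions, concentration rise rate, temperature and pressure are hypothetical values chosen for illustration, not measurements from any study cited here, and the GWP value applied at the end is the 100-year figure (including climate-carbon feedbacks) quoted earlier in this article.

```python
# Minimal sketch: estimating a methane flux from dynamic-flux-chamber
# readings, then converting the annual total to CO2-equivalent.
# All input values are hypothetical, chosen only to illustrate the units.

M_CH4 = 16.04   # molar mass of methane, g/mol
R = 8.314       # ideal gas constant, J/(mol*K)
GWP100_CH4 = 34 # 100-year GWP incl. climate-carbon feedbacks (IPCC AR5)

def chamber_flux(dc_dt_ppm_s: float, volume_m3: float, area_m2: float,
                 temp_k: float = 293.15, pressure_pa: float = 101325.0) -> float:
    """Methane flux in g/(m^2*s) from the concentration rise rate inside a
    closed chamber, using the ideal gas law to convert ppm to moles."""
    air_mol_per_m3 = pressure_pa / (R * temp_k)          # moles of air per m^3
    ch4_mol_rate = dc_dt_ppm_s * 1e-6 * air_mol_per_m3   # mol CH4 per m^3 per s
    return ch4_mol_rate * M_CH4 * volume_m3 / area_m2    # g CH4 per m^2 per s

# Hypothetical reading: 0.5 ppm/s rise in a 50 L chamber over 0.25 m^2 of soil
flux = chamber_flux(dc_dt_ppm_s=0.5, volume_m3=0.05, area_m2=0.25)
annual_kg_ch4 = flux * 0.25 * 3600 * 24 * 365 / 1000    # kg CH4/yr from this footprint
print(f"flux: {flux:.2e} g CH4/(m^2*s), ~{annual_kg_ch4:.2f} kg CH4/yr")
print(f"~{annual_kg_ch4 * GWP100_CH4:.0f} kg CO2e/yr at GWP100 = {GWP100_CH4}")
```

Airborne and eddy-covariance estimates involve more elaborate inversions, but the same unit conversions (mixing ratio to mass, instantaneous flux to annual total, methane to CO2-equivalent) recur throughout the measurement literature.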
Atmospheric measurements through either airborne or ground-based sampling are often limited in sample density due to spatial constraints or sampling duration limitations. One way of attributing methane to a particular source is taking continuous measurements of the stable carbon isotopic composition of atmospheric methane (δ13CH4) in the plumes of anthropogenic methane sources using a mobile analytical system. Since different types and maturity levels of natural gas have different δ13CH4 signatures, these measurements can be used to determine the origin of methane emissions. Activities related to natural gas emit methane plumes with δ13CH4 signatures ranging from -41.7 to -49.7 ± 0.7‰. High rates of methane emissions measured in the atmosphere at a regional scale, often through airborne measurements, may not represent typical leakage rates from natural gas systems.

Reporting and regulating emissions
Policies regulating the reporting of fugitive gas emissions vary, and there is often an emphasis on self-reporting by companies. A necessary condition for successfully regulating greenhouse gas (GHG) emissions is the capacity to monitor and quantify the emissions before and after the regulations are in place. Since 1993, there have been voluntary actions by the oil and gas industry in the United States to adopt new technologies that reduce methane emissions, as well as a commitment to employ best management practices to achieve methane reductions at the sector level. In Alberta, the Alberta Energy Regulator maintains a database of self-reported instances of gas migration and surface casing vent flows at wells in the province.
Reporting of leakage in British Columbia did not start until 1995, when wells were first required to be tested for leakage upon abandonment. Testing upon drilling was not required in British Columbia until 2010. Among the 4,017 wells drilled since 2010 in British Columbia, 19%, or 761 wells, have reported leakage problems. Fieldwork conducted by the David Suzuki Foundation, however, has discovered leaky wells that were not included in the British Columbia Oil and Gas Commission's (BCOGC) database, meaning that the number of leaky wells could be higher than reported. According to the BCOGC, surface casing vent flow is the major cause of leakage in wells, at 90.2%, followed by gas migration at 7.1%. Based on the methane leakage rates of the reported 1,493 wells currently leaking in British Columbia, a total leakage rate of 7,070 m3 daily (2.5 million m3 yearly) is estimated, although this number may be underestimated, as demonstrated by the fieldwork done by the David Suzuki Foundation. Bottom-up inventories of leakage involve determining average leakage rates for various emission sources such as equipment, wells, or pipes, and extrapolating these to the total leakage estimated to be contributed by a given company. These methods usually underestimate methane emission rates, regardless of the scale of the inventory.

Addressing issues stemming from fugitive gas emissions
There are some solutions for addressing these issues. Most of them require policy implementation or changes at the company, regulator, or government levels (or all three). Policies can include emission caps, feed-in-tariff programs, and market-based solutions such as taxes or tradeable permits. Canada has enacted policies which include plans to reduce emissions from the oil and gas sector by 40 to 45% below 2012 levels by 2025.
The Alberta government also has plans to reduce methane emissions from oil and gas operations by 45% by 2025. Reducing fugitive gas emissions could help slow climate change, since methane's warming effect is roughly 25 times that of carbon dioxide over a 100-year time frame. Once emitted, methane is also oxidized in the atmosphere, which increases carbon dioxide concentrations and leads to further climate effects.

Costs of reducing fugitive gas emissions
Costs related to the implementation of policies designed to reduce fugitive gas emissions vary greatly depending on the geography, geology, and hydrology of the production and distribution areas. Often, the cost of reducing fugitive gas emissions falls to individual companies in the form of technology upgrades. This means that there is often a discrepancy between companies of different sizes as to how drastically they can afford to reduce their methane emissions.

Addressing and remediating fugitive gas emissions
Intervention at leaky wells affected by surface casing vent flows and gas migration can involve perforating the intervention area, pumping fresh water and then slurry into the well, and remedial cementing of the intervention interval using methods such as a bradenhead squeeze, cement squeeze, or circulation squeeze.

See also
Gas leak
Gas venting
Orphan wells in Alberta, Canada

References
Works cited
IPCC AR5 WG1 (2013), Stocker, T.F.; et al. (eds.), Climate Change 2013: The Physical Science Basis. Working Group 1 (WG1) Contribution to the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), Cambridge University Press. Climate Change 2013 Working Group 1 website.
bhp
BHP Group Limited (formerly known as BHP Billiton) is an Australian multinational mining and metals public company headquartered in Melbourne, Victoria, Australia. The Broken Hill Proprietary Company was founded on 16 July 1885 in the mining town of Silverton, New South Wales. By 2017, BHP was the world's largest mining company by market capitalisation, and Melbourne's third-largest company by revenue.
BHP Billiton was formed in 2001 through the merger of the Australian Broken Hill Proprietary Company Limited (BHP) and the Anglo-Dutch Billiton plc, trading on both the Australian and London Stock Exchanges as a dual-listed company. In 2015, some BHP Billiton assets were demerged and rebranded as South32, while a scaled-down BHP Billiton became BHP. In 2018, BHP Billiton Limited and BHP Billiton plc became BHP Group Limited and BHP Group plc, respectively. In the 2020 Forbes Global 2000, BHP Group was ranked as the 93rd-largest public company in the world. In January 2022, BHP relinquished its London Stock Exchange listing, becoming a solely Australian Securities Exchange-listed company. As of 2022, BHP is the largest company in Australia and the largest mining company in the world, both measured by market capitalisation.

History
Billiton
Billiton Maatschappij was founded on 29 September 1860, when its articles of association were approved by a meeting of shareholders in the Groot Keizershof hotel in The Hague, Netherlands. Two months later, the company acquired mineral rights to the Billiton (Belitung) and Bangka Islands in the Netherlands Indies archipelago off the eastern coast of Sumatra.
Billiton's initial ventures included tin and lead smelting in the Netherlands, followed in the 1940s by bauxite mining in Indonesia and Suriname. In 1970, Shell acquired Billiton. Billiton opened a tin smelting and refining plant in Phuket, Thailand, named Thaisarco (for Thailand Smelting And Refining Company, Limited). In 1994, South Africa's Gencor acquired the mining division of Billiton, excluding the downstream metals division. Billiton was divested from Gencor in 1997, and was amalgamated with Gold Fields in 1998. In 1997, Billiton plc became a constituent of the FTSE 100 Index, and in 2001 Billiton plc merged with the Broken Hill Proprietary Company Limited (BHP) to form BHP Billiton.

Broken Hill Proprietary Company
The Broken Hill Proprietary Company Limited (BHP), also known by the nickname "the Big Australian", was incorporated on 13 August 1885, operating the silver and lead mine at Broken Hill, in western New South Wales, Australia. The Broken Hill group floated on 10 August 1885. The first consignment of Broken Hill ore (48 tons, 5 cwt, 3 grs) was smelted at the Intercolonial Smelting and Refining Company's works at Spotswood, Victoria, a suburb of Melbourne. Historian Christopher Jay notes: The resulting 35,605 ounces of silver raised a lot of interest when exhibited at the City of Melbourne Bank in Collins Street. Some sceptics asserted the promoters were merely using silver from somewhere else, to ramp up the shares.... Another shareholder, the dominating W. R. Wilson, had had to lend William Jamieson, General Manager, a new suit so he could take the first prospectus, printed at Silverton near Broken Hill on 20 June 1885, to Adelaide to start the float process. The geographic Broken Hill, for which the town was named, was discovered and named by Captain Charles Sturt, stirring great interest among prospectors.
Nothing of note was discovered until 5 November 1883, when Charles Rasp, a boundary rider for the surrounding Mount Gipps Station, pegged out a 40-acre claim with contractors David James and James Poole. Together with a half-dozen backers, including station manager George McCulloch (a young cousin of Victorian Premier Sir James McCulloch), Rasp formed the Broken Hill Company, staking out the entire Hill. As costs mounted during the ensuing months of fruitless search, three of the original seven (now remembered as the Syndicate of Seven) sold their shares, so that, on the eve of the company's great success, there were nine shareholders: Rasp, McCulloch, Philip Charly (aka Charley), David James, James Poole (five of the original syndicate of seven, which had previously included George Urquhart and G.A.M. Lind), Bowes Kelly, W. R. Wilson, and William Jamieson (who had bought shares from several of the founders). John Darling, Jr. became a director of the company in 1892 and was chairman of directors from 1907 to 1914.
Strongly encouraged by the New South Wales Minister for Public Works, Arthur Hill Griffith, the company ventured into steel manufacturing in 1915, with its operations based primarily at the Newcastle Steelworks. The decision to move from mining ore at Broken Hill to opening a steelworks at Newcastle was due to the technical limitations in recovering value from mining the lower-lying sulphide ores. The discovery of Iron Knob and Iron Monarch near the western shore of the Spencer Gulf in South Australia, combined with the refinement, by BHP metallurgists A. D. Carmichael and Leslie Bradford, of the froth flotation technique for separating zinc sulphides from the accompanying gangue and the subsequent conversion (Carmichael-Bradford process) to oxides of the metal, allowed BHP to economically extract valuable metals from the heaps of tailings up to 40 ft (12 m) high at the mine site. In 1942, the Imperial Japanese Navy targeted the BHP steelworks during the largely unsuccessful shelling of Newcastle.
Newcastle operations were closed in 1999, and a 70-ton commemorative sculpture, The Muster Point, was installed on Industrial Drive, in the suburb of Mayfield, New South Wales. The long products side of the steel business was spun off to form OneSteel in 2000. In the 1950s, BHP began petroleum exploration, which became an increasing focus following oil and natural gas discoveries in Bass Strait in the 1960s. BHP also began to diversify into a variety of mining projects overseas. These included the Ok Tedi copper mine in Papua New Guinea, where the company was successfully sued by the indigenous inhabitants because of the environmental degradation caused by mining operations. BHP had better success with the giant Escondida copper mine in Chile, of which it owns 57.5%, and at the Ekati Diamond Mine in northern Canada, which BHP contracted for in 1996 and began mining in 1998, before selling its 80% stake to Dominion Diamond Corporation in 2013 as production declined.

BHP Billiton
In 2001, BHP merged with the Billiton mining company to form BHP Billiton. In 2002, flat steel products were demerged to form the publicly traded company BHP Steel, which in 2003 became BlueScope. In March 2005, BHP Billiton announced a US$7.3 billion agreed bid for WMC Resources, owners of the Olympic Dam copper, gold and uranium mine in South Australia, nickel operations in Western Australia and Queensland, and a Queensland fertiliser plant.
The takeover achieved 90 per cent acceptance on 17 June 2005, and 100 per cent ownership was announced on 2 August 2005, achieved through compulsory acquisition of the remaining 10 per cent of the shares.
On 8 November 2007, BHP Billiton announced it was seeking to purchase rival mining group Rio Tinto Group in an all-share deal. The initial offer of 3.4 shares of BHP Billiton stock for each share of Rio Tinto was rejected by the board of Rio Tinto for "significantly undervaluing" the company. It was unknown at the time whether BHP Billiton would attempt to purchase Rio Tinto through some form of hostile takeover. A formal hostile bid of 3.4 BHP Billiton shares for each Rio Tinto share was announced on 6 February 2008. On 14 May 2008, BHP Billiton shares rose to a record high of A$48.90 following speculation that Chinese mining firm Chinalco was considering purchasing a large stake. As global nickel prices fell, however, BHP Billiton announced on 25 November 2008 that it would withdraw its A$66 billion takeover bid for Rio Tinto Group, stating that the "risks to shareholder value" would "increase" to "an unacceptable level" due to the global financial crisis.
On 21 January 2009, BHP Billiton announced that the Ravensthorpe Nickel Mine in Western Australia would cease operations, ending shipments of ore from Ravensthorpe to the Yabulu nickel plant in Queensland, Australia. The Yabulu refinery was subsequently sold to Queensland billionaire Clive Palmer, becoming the Palmer Nickel and Cobalt Refinery. The Pinto Valley mine in the United States was also closed. Mine closures and general scaling back during the global financial crisis accounted for 6,000 employee layoffs. With the nickel market saturated by both deteriorating economics and cheaper extraction methods, on 9 December 2009 BHP Billiton sold the Ravensthorpe Nickel Mine, which had cost A$2.4 billion to build, to Vancouver-based First Quantum Minerals for US$340 million. First Quantum, a Canadian company, was one of three bidders for the mine, tendering the lowest offer, and returned the mine to production in 2011. Ravensthorpe cost BHP US$3.6 billion in write-downs when it was shut in January 2009 after less than a year of production.
In January 2010, following the BHP Billiton purchase of Athabasca Potash for US$320 million, The Economist reported that, by 2020, BHP Billiton could produce approximately 15 per cent of world demand for potash. In August 2010, BHP Billiton made a hostile takeover bid worth US$40 billion for PotashCorp. The bid came after BHP's first offer, made on 17 August, was rejected as undervalued. The attempted acquisition marked a major strategic move by BHP outside hard commodities, commencing the diversification of its business away from resources with high exposure to carbon price risk, like coal, petroleum and iron ore. The takeover bid was opposed by the Government of Saskatchewan under Premier Brad Wall. On 3 November, Canadian Industry Minister Tony Clement announced the preliminary rejection of the deal under the Investment Canada Act, giving BHP Billiton 30 days to refine its offer before a final decision was made; BHP withdrew its offer on 14 November 2010.
On 22 February 2011, BHP Billiton announced that it had paid $4.75 billion in cash to Chesapeake Energy for its Fayetteville shale assets, which include 487,000 acres (1,970 km2) of mineral rights leases and 420 miles (680 km) of pipeline located in north central Arkansas.
The wells on the mineral leases were producing about 415 million cubic feet of natural gas per day at the time, and BHP Billiton planned to spend $800 million to $1 billion a year over 10 years to develop the field and triple production. On 14 July 2011, BHP Billiton announced that it would acquire Petrohawk Energy of the United States for approximately $12.1 billion in cash, at US$38.75 per share, considerably expanding its shale natural gas resources.
On 22 August 2012, BHP Billiton announced that it was delaying its US$20 billion (£12 billion) Olympic Dam copper mine expansion project in South Australia to study less capital-intensive options, deferring its dual-harbour strategy at West Australian Iron Ore and slowing down its potash growth option in Canada. The company simultaneously announced a freeze on approving any major new expansion projects. Days after announcing the Olympic Dam pull-back, BHP Billiton announced that it was selling its Yeelirrie uranium project to the Canadian company Cameco for around $430 million. The sale was part of a broader move to step away from resource expansion in Australia.
On 19 August 2014, BHP Billiton announced it would create an independent global metals and mining company based on a selection of its aluminium, coal, manganese, nickel, and silver assets, including a number of its subsidiaries in South Africa and Southern Africa. The newly formed entity, named South32, was demerged in 2015 with listings on the Australian Securities Exchange, the JSE and the London Stock Exchange.
BHP Billiton agreed to pay a fine of $25 million to the United States Securities and Exchange Commission in 2015 in connection with violations of the Foreign Corrupt Practices Act related to its "hospitality program" at the 2008 Summer Olympics in Beijing. BHP Billiton had invited 176 government and state-owned-enterprise officials to attend the Games on an all-expenses-paid package. While BHP Billiton claimed to have compliance processes in place to avoid conflicts of interest, the SEC found that BHP Billiton had invited officials from at least four countries where it had interests in influencing the officials' decisions (Congo, Guinea, the Philippines and Burundi).
In August 2016, BHP Billiton recorded its worst annual loss in history, $6.4 billion. Towards the end of 2016, BHP Billiton indicated it would expand its petroleum business and make new investments in the sector. In February 2017, BHP Billiton announced a $2.2 billion investment in the new BP platform in the Gulf of Mexico. During the same year, as part of a plan to increase productivity at the Escondida mine in Chile, the world's biggest copper mine, BHP Billiton attempted to get workers to accept a four-year pay freeze, a 66% reduction in the end-of-conflict bonus offering, and increased shift flexibility. This resulted in a major workers' strike and forced the company to declare force majeure on two shipments, which drove copper prices up by 4%.
In April 2017, activist hedge fund manager Elliott Advisors proposed a plan for BHP Billiton to spin off its American petroleum assets and significantly restructure the business, including the scrapping of its dual Sydney-London listing, suggesting shares be offered only in the United Kingdom, while leaving its headquarters and tax residence in Australia, where shares would trade as depository instruments.
At the time of the correspondence, Elliott held about 4.1 per cent of the issued shares in London-listed BHP Billiton plc, worth $3.81 billion. Australia's government warned it would block moves to shift BHP Billiton's stock listing from Australia to the United Kingdom. Australian Treasurer Scott Morrison said the move would be contrary to the country's national interest and would breach government orders mandating a listing on the Australian Securities Exchange. BHP Billiton dismissed the plan, saying the costs and risks of Elliott's proposal outweighed any potential benefits.

BHP
In May 2017, with much of the former Billiton assets having been disposed of, BHP Billiton began to rebrand itself as BHP, at first in Australia and then globally. It replaced the slogan "The Big Australian" with "Think Big", with an advertising campaign rolling out in mid-May 2017. Work on the change began in late 2015, according to BHP's chief external affairs officer.
In August 2017, BHP announced that it would sell off its US shale oil and gas business. In July 2018, the company agreed to sell its shale assets to BP for $10.5 billion, indicating its intention to return the funds to investors. On 29 September 2018, BHP completed the sale of its Fayetteville onshore US gas assets to a wholly owned subsidiary of Merit Energy Company.
In August 2021, BHP announced plans to exit the oil and gas industry by merging its hydrocarbon business with Woodside Energy, Australia's largest independent gas producer. It also announced its intention to delist from the London Stock Exchange and consolidate on the Australian Securities Exchange; this occurred in January 2022. In April 2023, BHP took over Oz Minerals in a $9.6 billion deal.

Corporation
Until January 2022, BHP was a dual-listed company: the Australian BHP Billiton Limited and the British BHP Billiton plc were separately listed with separate shareholder bodies, while conducting business as one operation with identical boards of directors and a single management structure. The headquarters of BHP Billiton Limited, and the global headquarters of the combined group, were located in Melbourne, Australia; the headquarters of BHP Billiton plc were located in London, England. Its main office locations were in Australia, the U.S., Canada, the UK, Chile, Malaysia, and Singapore. BHP Billiton Limited and BHP Billiton plc were renamed BHP Group Limited and BHP Group plc, respectively, on 19 November 2018.

Senior management
In 1998, BHP hired American Paul Anderson to restructure the company. Anderson completed the four-year project with a merger between BHP and London-based Billiton. In July 2002, Brian Gilbertson of Billiton was appointed CEO, but resigned after just six months, citing irreconcilable differences with the board. Upon Gilbertson's departure in early 2003, Chip Goodyear was appointed the new CEO, increasing sales by 47 per cent and profits by 78 per cent during his tenure. Goodyear retired on 30 September 2007, and Marius Kloppers was his successor. Following Kloppers' tenure, Andrew Mackenzie, chief executive of Non-Ferrous, assumed the role of CEO in 2013. Australia mining head Mike Henry succeeded Mackenzie on 1 January 2020.
Operations
BHP has mining operations in Australia, North America, and South America, and petroleum operations in the U.S., Australia, Trinidad and Tobago, the UK, and Algeria. The company has four primary operational units: coal, copper, iron ore, and petroleum.

BHP Foundation
The BHP Foundation is a philanthropic organisation funded by BHP, which as of October 2023 was funding 38 projects in 65 countries. Its Australian programs are focused on Indigenous Australian self-determination and young people. One of its partner organisations is Reconciliation Australia.

Controversies
Responsibility for climate damage
BHP is listed as one of the 90 fossil fuel extraction and marketing companies responsible for two-thirds of global greenhouse gas emissions since the beginning of the industrial age. Its cumulative emissions as of 2010 have been estimated at 7,606 MtCO2e, representing 0.52% of global industrial emissions between 1751 and 2010, ranking it the 19th-largest corporate polluter. According to BHP management, 10% of these emissions are from direct operations, while 90% come from products sold by the company. BHP has been voluntarily reporting its direct GHG emissions since 1996. In 2013, it was criticised for lobbying against carbon pricing in Australia. BHP reported total CO2e emissions (direct and indirect) for the twelve months ending 30 June 2020 at 15,800 kt.

Ok Tedi environmental disaster
The Ok Tedi environmental disaster caused severe harm to the environment along the Ok Tedi River and the Fly River in the Western Province of Papua New Guinea between around 1984 and 2013. In 1999, BHP reported that 90 million tons of mine waste was discharged into the river annually for more than ten years, destroying downstream villages, agriculture and fisheries. Mine wastes were deposited along 1,000 km (620 mi) of the Ok Tedi and the Fly River below its confluence with the Ok Tedi, and over an area of 100 km2 (39 sq mi). BHP's CEO, Paul Anderson, said that the Ok Tedi Mine was "not compatible with our environmental values and the company should never have become involved." As of 2006, mine operators continued to discharge 80 million tons of tailings, overburden, and mine-induced erosion into the river system each year. About 1,588 km2 (613 sq mi) of forest has died or is under stress, and as much as 3,000 km2 (1,200 sq mi) may eventually be harmed.
In the 1990s, the communities of the lower Fly region, including the Yonggom people, sued BHP and received US$28.6 million in an out-of-court settlement, the culmination of an enormous public-relations campaign against the company by environmental groups. As part of the settlement, a (limited) dredging operation was put in place and efforts were made to rehabilitate the site around the mine. However, the mine is still in operation and waste continues to flow into the river system. BHP was granted legal indemnity from future mine-related damages. Experts predict that it will take 300 years to clean up the toxic contamination.

Bento Rodrigues dam collapse
In 2015, the company was involved in the Bento Rodrigues tailings dam collapse, the worst environmental disaster in the history of the state of Minas Gerais, Brazil.
On 5 November 2015, an iron ore mine tailings dam near Mariana, south-eastern Brazil, owned and operated by Samarco, a joint venture of BHP and Vale, suffered a catastrophic failure. The resulting mudflow devastated the nearby town of Bento Rodrigues, killing 19 people, injuring more than 50, causing enormous ecological damage, and threatening life along the Rio Doce down to its mouth on the Atlantic Ocean. The accident was one of the biggest environmental disasters in Brazil's history.

An investigation into the disaster commissioned by BHP, Vale and Samarco found the collapse was due to a variety of construction and design flaws. In June 2018, Samarco, Vale and BHP signed an agreement under which the Brazilian government dropped a $7 billion civil lawsuit against the mining companies and allowed two years for the companies to address the larger US$55 billion civil lawsuit brought by Brazil's federal prosecutors seeking social, environmental and economic compensation.

Escondida and Cerro Colorado water usage issue
BHP has been accused of irregularities in drawing water from aquifers above the granted limit. As a result, the water table has dropped significantly, making land-based livelihoods less viable for people of the community, many of whom have been forced to relocate to urban areas. In January 2021, the Supreme Court of Chile upheld the objections of local indigenous communities about BHP's water usage and its impacts on wetland areas. In July of the same year, the court ordered BHP to restart the application process for Cerro Colorado operating permits from scratch.

Sexual harassment
From 2019 to 2021, BHP registered six cases of sexual assault and seventy-three cases of sexual harassment. A survey of 425 workers conducted by the Western Mine Workers' Alliance showed that two-thirds of female respondents had experienced verbal sexual harassment while working in the FIFO mining industry, with 36% of women and 10% of men having experienced some form of harassment in the last 12 months. In response, BHP terminated or otherwise permanently removed forty-eight workers from its sites.

Other significant accidents
Bad weather caused a BHP Billiton helicopter to crash in Angola on 16 November 2007, killing the helicopter's five passengers: BHP Billiton Angola Chief Operating Officer David Hopgood (Australian), Angola Technical Services Operations Manager Kevin Ayre (British), Wild Dog Helicopters pilot Kottie Breedt (South African), Guy Sommerfield (British) of MMC, and Louwrens Prinsloo (Namibian) of Prinsloo Drilling. The helicopter crashed approximately 80 kilometres (50 mi) from the Alto Cuilo diamond exploration camp in Lunda Norte, northeastern Angola. BHP Billiton responded by suspending operations in the country.

See also
Mining in Australia

External links
Official website
Documents and clippings about BHP in the 20th Century Press Archives of the ZBW
corn ethanol
Corn ethanol is ethanol produced from corn biomass and is the main source of ethanol fuel in the United States, mandated to be blended with gasoline in the Renewable Fuel Standard. Corn ethanol is produced by ethanol fermentation and distillation. It is debatable whether the production and use of corn ethanol results in lower greenhouse gas emissions than gasoline. Approximately 45% of U.S. corn croplands are used for ethanol production.

Uses
Corn ethanol production has increased roughly eightfold since 2001. Of the 9.50 billion bushels of corn produced in 2001, 0.71 billion bushels were used to produce corn ethanol; by 2018, 5.60 billion of the 14.62 billion bushels produced went to corn ethanol, according to the United States Department of Energy. Overall, 94% of ethanol in the United States is produced from corn.

Currently, corn ethanol is mainly used in blends with gasoline to create mixtures such as E10, E15, and E85. Ethanol is mixed into more than 98% of United States gasoline to reduce air pollution. Corn ethanol serves as an oxygenate when mixed with gasoline. E10 and E15 can be used in all engines without modification. However, blends like E85, with a much greater ethanol content, require significant engine modifications before a vehicle can run on the mixture without damage. Some vehicles that currently use E85 fuel, also called flex fuel, include the Ford Focus, Dodge Durango, and Toyota Tundra, among others.

The future use of corn ethanol as a main gasoline replacement is uncertain. Corn ethanol has yet to prove as cost-effective as gasoline, since it is considerably more expensive to produce: the corn must first go through an extensive milling process before it can be used as a fuel source. One major drawback of corn ethanol is its low energy returned on energy invested (EROI), the ratio of the energy a fuel delivers to the energy required to produce it. Compared to oil, with an EROI of about 11:1, corn ethanol has a much lower EROI of about 1.5:1, and it also provides less mileage per gallon than gasoline. In the future, as technology advances and oil becomes less abundant, the milling process may require less energy, bringing the EROI closer to that of oil. Another serious obstacle to corn ethanol as a gasoline replacement is engine damage in standard vehicles. E10 contains ten percent ethanol and is acceptable for most vehicles on the road today, while E15 contains fifteen percent ethanol and is usually prohibited for cars built before 2001. E85, which contains 85% ethanol, requires engine modification before an engine can process a high volume of ethanol for an extended period; without modifications to handle the increased corrosiveness of high-ethanol fuel, most older and current vehicles would be unsuitable. In addition, most gas stations do not offer E85 refueling: the United States Department of Energy reports that only 3,355 of the roughly 168,000 gas stations across the United States offer E85.
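As a rough illustration of the EROI figures quoted above (my own arithmetic, not from the cited sources), the net energy fraction of a fuel, i.e. the share of delivered energy that is not consumed in producing it, follows directly from the EROI:

\text{EROI} = \frac{E_{\text{delivered}}}{E_{\text{invested}}}, \qquad \text{net energy fraction} = \frac{\text{EROI} - 1}{\text{EROI}}

\text{corn ethanol: } \frac{1.5 - 1}{1.5} \approx 0.33, \qquad \text{oil: } \frac{11 - 1}{11} \approx 0.91

On these figures, only about a third of corn ethanol's delivered energy is a net gain, versus about nine-tenths for oil, which is why the EROI gap matters more than the raw ratios alone suggest.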
Production process
There are two main types of corn ethanol production: dry milling and wet milling, which differ in the initial grain treatment method and in their co-products.

Dry milling
The vast majority (≈80%) of corn ethanol in the United States is produced by dry milling. In the dry milling process, the entire corn kernel is ground into flour, or "mash," which is then slurried by adding water. Enzymes are added to the mash to hydrolyze the starch into simple sugars. Ammonia is added to control the pH and as a nutrient for the yeast, which is added later. The mixture is processed at high temperatures to reduce bacteria levels, then transferred to fermenters and cooled. Yeast is added to ferment the sugars into ethanol and carbon dioxide. The entire process takes 40 to 50 hours, during which time the mash is kept cool and agitated to promote yeast activity. The mash is then transferred to distillation columns, where the ethanol is separated from the stillage. The ethanol is dehydrated to about 200 proof using a molecular sieve system, and a denaturant such as gasoline is added to render the product undrinkable. The product is then ready to ship to gasoline retailers or terminals. The remaining stillage is processed into a highly nutritious livestock feed known as distillers dried grains with solubles (DDGS). The carbon dioxide released by the process is used to carbonate beverages and to manufacture dry ice.

Wet milling
In wet milling, the corn grain is separated into its components by steeping in dilute sulfuric acid for 24 to 48 hours. The slurry then goes through a series of grinders to separate out the corn germ. The remaining fiber, gluten, and starch are segregated using screen, hydrocyclone, and centrifugal separators. The corn starch and remaining water can be fermented into ethanol through a process similar to dry milling, dried and sold as modified corn starch, or made into corn syrup. The gluten protein and steeping liquor are dried to make a corn gluten meal that is sold to the livestock industry. The heavy steep water is also sold as a feed ingredient and is used as an alternative to salt in the winter months. Corn oil is also extracted and sold.

Environmental issues
Corn ethanol results in lower greenhouse gas emissions than gasoline and is fully biodegradable, unlike some fuel additives such as MTBE. However, because the energy to run many U.S. distilleries comes mainly from coal plants, there has been considerable debate about the sustainability of corn ethanol as a replacement for fossil fuels. Additional controversy concerns the large amount of arable land required for crops and the resulting impacts on grain supply, as well as direct and indirect land use change effects. Other issues relate to pollution, water use for irrigation and processing, energy balance, and emission intensity over the full life cycle of ethanol production.

Greenhouse gas emissions
Several full life cycle studies have found that corn ethanol reduces well-to-wheel greenhouse gas emissions by up to 50 percent compared to gasoline. However, other research has concluded that corn ethanol produces more carbon emissions per unit of energy than gasoline once fertilizer use and land use change are factored in. Ethanol-blended fuels currently on the market, whether E10 or E85, meet stringent tailpipe emission standards.

Croplands
One of the main controversies around corn ethanol production is its demand for arable cropland to grow corn for ethanol, land which is then not available to grow corn for human or animal consumption. In the United States, 40% of the acreage designated for corn grain is used for corn ethanol production; after accounting for co-products, a net 25% of the crop is converted to ethanol, leaving about 60% of the crop yield for human or animal consumption.
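A quick back-of-envelope calculation (my own arithmetic, using only the USDA bushel figures quoted in this article; the 2020 figures appear in the following section) shows how the gross share of the corn crop going to ethanol has grown:

# Gross share of the U.S. corn crop used for ethanol, from the bushel
# figures cited in this article (billion bushels). Shares are before
# any co-product credit.
corn_produced = {2001: 9.50, 2018: 14.62, 2020: 14.99}
corn_to_ethanol = {2001: 0.71, 2018: 5.60, 2020: 5.05}

for year, produced in corn_produced.items():
    share = corn_to_ethanol[year] / produced
    print(f"{year}: {share:.0%} of the corn crop went to ethanol")
# -> 2001: 7%, 2018: 38%, 2020: 34%

The gross 2018 share (about 38%) is consistent with the 40%-of-acreage figure above; after co-product credits the net share converted to ethanol is closer to 25%.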
Economic impact of corn ethanol
The Renewable Fuels Association (RFA), the ethanol industry's lobbying group, claims that ethanol production increases the price of corn by increasing demand. The RFA claims that ethanol production has positive economic effects for US farmers, but it does not elaborate on the effects for other populations for whom field corn is part of the staple diet. An RFA lobby document states that "In a January 2007 statement, the USDA Chief Economist stated that farm program payments were expected to be reduced by some $6 billion due to the higher value of a bushel of corn." Corn production in 2009 reached over 13.2 billion bushels, and per-acre yield jumped to over 165 bushels.

In the United States, 5.05 billion bushels of corn were used for ethanol production out of 14.99 billion bushels produced in 2020, according to USDA data. According to the U.S. Department of Energy's Alternative Fuels Data Center, "The increased ethanol [production] seems to have come from the increase in overall corn production and a small decrease in corn used for animal feed and other residual uses. The amount of corn used for other uses, including human consumption, has stayed fairly consistent from year to year." This does not prove there was no impact on food supplies: since U.S. corn production approximately doubled between 1987 and 2018, it is probable that some cropland previously used to grow other food crops is now used to grow corn. It is also possible or probable that some marginal land has been converted or returned to agricultural use, which may have negative environmental impacts.

Alternative biomass for ethanol
Remnants from food production, such as corn stover, could be used to produce ethanol instead of food corn. Ethanol derived from sugar beet, as used in Europe, or from sugar cane, as in Brazil, offers up to an 80% reduction in well-to-wheel carbon dioxide emissions. The use of cellulosic biomass to produce ethanol is a second-generation biofuel approach that some consider a solution to the food versus fuel debate; it has the potential to cut life cycle greenhouse gas emissions by up to 86 percent relative to gasoline.

See also
Cellulosic ethanol
Ethanol fuel
Ethanol fuel in the United States
wash
WASH (or Watsan, WaSH) is an acronym that stands for "water, sanitation and hygiene". It is used widely by non-governmental organizations and aid agencies in developing countries. The purposes of providing access to WASH services include achieving public health gains, improving human dignity in the case of sanitation, implementing the human right to water and sanitation, reducing the burden of collecting drinking water for women, reducing risks of violence against women, improving education and health outcomes at schools and health facilities, and reducing water pollution. Access to WASH services is also an important component of water security. Universal, affordable and sustainable access to WASH is a key issue within international development and is the focus of the first two targets of Sustainable Development Goal 6 (SDG 6). Targets 6.1 and 6.2 aim at equitable and accessible water and sanitation for all. In 2017, it was estimated that 2.3 billion people lived without basic sanitation facilities and 844 million people lived without access to safe and clean drinking water.

The WASH-attributable burden of disease and injuries has been studied in depth. Typical diseases and conditions associated with lack of WASH include diarrhea, malnutrition and stunting, in addition to neglected tropical diseases. Lack of WASH poses additional health risks for women, for example during pregnancy or in connection with menstrual hygiene management. Chronic diarrhea can have long-term negative effects on children, in terms of both physical and cognitive development. Still, collecting precise scientific evidence on the health outcomes of improved access to WASH is difficult due to a range of complicating factors. Scholars suggest a need for longer-term studies of technology efficacy, greater analysis of sanitation interventions, and studies of the combined effects of multiple interventions in order to better analyze WASH health outcomes.

Access to WASH needs to be provided at the household level but also in non-household settings like schools, healthcare facilities, workplaces (including prisons), temporary use settings, mass gatherings, and for dislocated populations. In schools, group handwashing facilities and behaviors are a promising approach to improving hygiene. Lack of WASH facilities at schools can prevent students (especially girls) from attending school, reducing their educational achievements and future work productivity. Challenges for providing WASH services include providing services to urban slums, failures of WASH systems (e.g. leaking water distribution systems), water pollution and the impacts of climate change. Planning approaches for better, more reliable and equitable access to WASH include national WASH plans and monitoring (including gender mainstreaming), integrated water resources management (IWRM) and, more recently, improving the climate resilience of WASH services. Adaptive capacity in water management systems can help to absorb some of the impacts of climate-related events and increase climate resilience. Stakeholders at various scales, from small urban utilities to national governments, need access to reliable information about the regional climate and any expected changes due to global climate change.
Components
The concept of WASH groups together water supply (access to drinking water services), sanitation, and hygiene, because the impacts of deficiencies in each area overlap strongly. WASH consists of access to drinking water services, sanitation services and hygiene.

Drinking water services
A "safely managed drinking water service" is "one located on premises, available when needed and free from contamination". The terms "improved water source" and "unimproved water source" were coined in 2002 as a drinking water monitoring tool by the Joint Monitoring Programme (JMP) of UNICEF and WHO. An "improved water source" refers to "piped water on premises (piped household water connection located inside the user's dwelling, plot or yard), and other improved drinking water sources (public taps or standpipes, tube wells or boreholes, protected dug wells, protected springs, and rainwater collection)".

Access to drinking water is included in Target 6.1 of Sustainable Development Goal 6 (SDG 6), which states: "By 2030, achieve universal and equitable access to safe and affordable drinking water for all". This target has one indicator: Indicator 6.1.1 is the "Proportion of population using safely managed drinking water services". In 2017, 844 million people still lacked even a basic drinking water service. In 2019 it was reported that 435 million people used unimproved sources for their drinking water, and 144 million still used surface waters such as lakes and streams.

Drinking water can be sourced from surface water, groundwater or rainwater, in each case after collection, treatment and distribution. Desalinated seawater is another potential source. People without access to safe, reliable domestic water supplies face lower water security at specific times of the year due to cyclical changes in water quantity or quality. For example, where access to water on-premises is not available, drinking water quality at the point of use (PoU) can be much worse than at the point of collection (PoC). Correct household practices around hygiene, storage and treatment are therefore important. There are interactions between weather, water source and management, and these in turn affect drinking water safety.

Groundwater
Groundwater provides a critical freshwater supply, particularly in dry regions where surface water availability is limited. Globally, more than one-third of the water used originates underground. In the mid-latitude arid and semi-arid regions that lack sufficient surface water supply from rivers and reservoirs, groundwater is critical for sustaining ecology and meeting societal needs for drinking water and food production. The demand for groundwater is rapidly increasing with population growth, while climate change is imposing additional stress on water resources and raising the probability of severe drought.

The anthropogenic effects on groundwater resources are mainly due to groundwater pumping and the indirect effects of irrigation and land use changes.

Groundwater plays a central role in sustaining water supplies and livelihoods in sub-Saharan Africa. In some cases, groundwater is an additional water source that was not used previously. Reliance on groundwater is increasing in sub-Saharan Africa as development programs work towards improving water access and strengthening resilience to climate change.
In lower-income areas, groundwater supplies are typically installed without water quality treatment infrastructure or services. This practice rests on the assumption that untreated groundwater is suitable for drinking because of its relative microbiological safety compared to surface water; chemical risks, however, are largely disregarded. Chemical contaminants occur widely in groundwaters used for drinking but are not regularly monitored. Example priority parameters are fluoride, arsenic, nitrate and salinity.

Sanitation services
Sanitation services are commonly described by a ladder of service levels, from lowest to highest: open defecation, unimproved, limited, basic, and safely managed (a simple encoding of this ladder is sketched at the end of this section). A distinction is made between sanitation facilities that are shared between two or more households (a "limited service") and those that are not shared (a "basic service"). Improved sanitation facilities are defined as those designed to hygienically separate excreta from human contact. With regards to toilets, improved sanitation includes: a flush toilet, connection to a piped sewer system, connection to a septic system, a flush or pour-flush toilet to a pit latrine, a pit latrine with slab, a ventilated improved pit latrine, and a composting toilet.

Access to sanitation services is included in Target 6.2 of Sustainable Development Goal 6: "By 2030, achieve access to adequate and equitable sanitation and hygiene for all and end open defecation, paying special attention to the needs of women and girls and those in vulnerable situations." This target has one indicator: Indicator 6.2.1 is the "Proportion of population using (a) safely managed sanitation services and (b) a hand-washing facility with soap and water".

In 2017, 4.5 billion people did not have toilets at home that safely manage waste, despite improvements in access to sanitation over the past decades. Approximately 600 million people share a toilet or latrine with other households, and 892 million people practice open defecation.

Many barriers make it difficult to achieve "sanitation for all", including social, institutional, technical and environmental challenges. The problem of providing access to sanitation services therefore cannot be solved by focusing on technology alone. Instead, it requires an integrated perspective that includes planning, using economic opportunities (e.g. from reuse of excreta), and behavior change interventions.

Fecal sludge management and sanitation workers
Sanitation services would not be complete without safe fecal sludge management (FSM): the storage, collection, transport, treatment, and safe end use or disposal of fecal sludge. Fecal sludge is defined broadly as what accumulates in onsite sanitation systems (e.g. pit latrines, septic tanks and container-based solutions) and, specifically, is not transported through a sewer. Sanitation workers are the people who clean, maintain, operate, or empty a sanitation technology at any step of the sanitation chain.
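The service-ladder classification above lends itself to a simple ordered encoding. The following is a minimal illustrative sketch; the names and decision logic are my own simplification, not an official JMP specification:

# Minimal sketch of the sanitation service ladder described above.
# The enum ordering reflects the ladder from lowest to highest level.
from enum import IntEnum

class SanitationLevel(IntEnum):
    OPEN_DEFECATION = 0
    UNIMPROVED = 1
    LIMITED = 2         # improved facility, but shared between households
    BASIC = 3           # improved facility, not shared
    SAFELY_MANAGED = 4  # improved, not shared, excreta safely managed

def classify(has_facility: bool, improved: bool,
             shared: bool, safely_managed: bool) -> SanitationLevel:
    if not has_facility:
        return SanitationLevel.OPEN_DEFECATION
    if not improved:
        return SanitationLevel.UNIMPROVED
    if shared:
        return SanitationLevel.LIMITED
    return SanitationLevel.SAFELY_MANAGED if safely_managed else SanitationLevel.BASIC

# A shared ventilated improved pit latrine counts only as a "limited" service:
assert classify(True, True, True, False) == SanitationLevel.LIMITED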
Hygiene
Hygiene is a broad concept: "Hygiene refers to conditions and practices that help to maintain health and prevent the spread of diseases." It can comprise many behaviors, including handwashing, menstrual hygiene and food hygiene. In the context of WASH, handwashing with soap and water is regarded as a top priority in all settings and has been chosen as an indicator for national and global monitoring of hygiene access. "Basic hygiene facilities" are those where people have a handwashing facility with soap and water available on their premises. Handwashing facilities can consist of a sink with tap water, buckets with taps, tippy-taps and portable basins.

In the context of SDG 6, hygiene is included in the indicator for Target 6.2: "Proportion of population using [...] (b) a hand-washing facility with soap and water".

In 2017, the global situation was reported as follows: only 1 in 4 people in low-income countries had handwashing facilities with soap and water at home, and only 14% of people in sub-Saharan Africa had such facilities. Worldwide, at least 500 million women and girls lack adequate, safe and private facilities for managing menstrual hygiene. Approximately 40% of the world's population live without basic handwashing facilities with soap and water at home.

Purposes
The purposes of providing access to WASH services include achieving public health gains, improving human dignity in the case of sanitation, implementing the human right to water and sanitation, reducing the burden of collecting drinking water for women, reducing risks of violence against women, improving education and health outcomes at schools and health facilities, and reducing water pollution. Access to WASH services is also an important component of achieving water security. Improving access to WASH services can improve health, life expectancy, student learning, gender equality, and other important issues of international development, and can assist with poverty reduction and socio-economic development.

Health aspects
Categories of health impacts
Health impacts resulting from a lack of safe sanitation systems fall into three categories:
Direct impact (infections): fecal–oral infections (transmitted via the fecal–oral route), helminth infections and insect vector diseases (see also waterborne diseases from contaminated drinking water). For example, lack of clean water and proper sanitation can result in feces-contaminated drinking water and cause life-threatening diarrhea for infants.
Sequelae (conditions caused by preceding infection): stunting or growth faltering, consequences of stunting (obstructed labour, low birthweight), impaired cognitive function, pneumonia (related to repeated diarrhea in undernourished children), and anemia (related to hookworm infections).
Broader well-being: anxiety, sexual assault (and related consequences), adverse birth outcomes, as well as long-term problems such as school absence, poverty, decreased economic productivity and antimicrobial resistance.

WASH-attributable burden of diseases and injuries
The WHO has investigated which proportion of death and disease worldwide can be attributed to insufficient WASH services.
In their analysis, the WHO focuses on four health outcomes: diarrhea, acute respiratory infections, undernutrition, and soil-transmitted helminthiases (STHs). These health outcomes are also included as an indicator for achieving Sustainable Development Goal 3 ("Good Health and Well-being"): Indicator 3.9.2 reports the "mortality rate attributed to unsafe water, sanitation, and lack of hygiene".

In 2023, the WHO summarized the available data with the following key findings: "In 2019, use of safe WASH services could have prevented the loss of at least 1.4 million lives and 74 million disability-adjusted life years (DALYs) from four health outcomes. This represents 2.5% of all deaths and 2.9% of all DALYs globally." Of the four health outcomes studied, diarrheal disease had the highest attributable burden: over 1 million deaths and 55 million DALYs from diarrheal diseases were linked with lack of WASH, of which 564,000 deaths were linked to unsafe sanitation in particular. Acute respiratory infections were the second largest cause of WASH-attributable burden of disease in 2019, followed by undernutrition and soil-transmitted helminthiases. The latter does not lead to comparably high death numbers but is fully connected to unsafe WASH: its population-attributable fraction is estimated to be 100%.

The connection between lack of WASH and burden of disease is primarily one of poverty and poor access in developing countries: "the WASH-attributable mortality rates were 42, 30, 4.4 and 3.7 deaths per 100 000 population in low-income, lower-middle income, upper-middle income and high-income countries, respectively." The regions most affected are the WHO Africa and South-East Asia regions, where between 66 and 76% of the diarrheal disease burden could be prevented if access to safe WASH services were provided.

Most of the diseases resulting from lack of sanitation have a direct relation to poverty. For example, open defecation, the most extreme form of lack of sanitation, is a major factor in causing various diseases, most notably diarrhea and intestinal worm infections.

An earlier report by the World Health Organization, which analyzed data up to 2016, had found higher values: "The WASH-attributable disease burden amounts to 3.3% of global deaths and 4.6% of global DALYs. Among children under 5 years, WASH-attributable deaths represent 13% of deaths and 12% of DALYs. Worldwide, 1.9 million deaths and 123 million DALYs could have been prevented in 2016 with adequate WASH." An even earlier study from 2002 had estimated higher values still, namely that up to 5 million people die each year from preventable waterborne diseases. These declining estimates of death and disease can partly be explained by the progress achieved in some countries in improving access to WASH. For example, several large Asian countries (China, India, Indonesia) managed to increase "safely managed sanitation services" by more than 10 percentage points between 2015 and 2020.
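A rough consistency check of the 2019 headline figures quoted above (my own arithmetic, not from the WHO report) recovers plausible global totals:

# If 1.4 million WASH-attributable deaths were 2.5% of all deaths, and
# 74 million WASH-attributable DALYs were 2.9% of all DALYs, the implied
# global totals for 2019 are:
wash_deaths, death_share = 1.4e6, 0.025
wash_dalys, daly_share = 74e6, 0.029

print(f"implied global deaths: {wash_deaths / death_share:.3g}")  # ~5.6e+07
print(f"implied global DALYs:  {wash_dalys / daly_share:.3g}")    # ~2.55e+09

That is roughly 56 million deaths and 2.6 billion DALYs worldwide, in line with commonly cited global totals for 2019.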
List of diseases
At least twelve diseases are more likely to occur when WASH services are inadequate. There are also other diseases where adverse health outcomes are likely to be linked to inadequate WASH but have not yet been quantified.

Diarrhea, malnutrition and stunting
Diarrhea is primarily transmitted through fecal–oral routes. In 2011, infectious diarrhea resulted in about 0.7 million deaths in children under five years old and 250 million lost school days, equating to about 2,000 child deaths per day. Children suffering from diarrhea are more vulnerable to becoming underweight (due to stunted growth), which in turn makes them more vulnerable to other diseases such as acute respiratory infections and malaria. Chronic diarrhea can have a negative effect on child development, both physical and cognitive.

Numerous studies have shown that improvements in drinking water and sanitation lead to decreased risks of diarrhea. Such improvements include, for example, use of water filters, provision of high-quality piped water, and sewer connections. Diarrhea can be prevented, and the lives of 525,000 children saved annually (estimate for 2017), by improved sanitation, clean drinking water, and handwashing with soap. In 2008 the same figure was estimated at 1.5 million children.

The combination of direct and indirect deaths from malnutrition caused by unsafe water, sanitation and hygiene practices was estimated by the World Health Organization in 2008 to lead to 860,000 deaths per year in children under five years of age. The multiple interdependencies between malnutrition and infectious diseases make it very difficult to quantify the portion of malnutrition caused by infectious diseases that are in turn caused by unsafe WASH practices. Based on expert opinions and a literature survey, researchers at the WHO concluded that approximately half of all cases of malnutrition (which often leads to stunting) in children under five are associated with repeated diarrhea or intestinal worm infections resulting from unsafe water, inadequate sanitation or insufficient hygiene.

Neglected tropical diseases
Water, sanitation and hygiene interventions help to prevent many neglected tropical diseases (NTDs), for example soil-transmitted helminthiasis. Approximately two billion people are infected with soil-transmitted helminths worldwide. This type of intestinal worm infection is transmitted via worm eggs in feces, which in turn contaminate soil in areas where sanitation is poor. An integrated approach to NTDs and WASH benefits both sectors and the communities they aim to serve. This is especially true in areas that are endemic for more than one NTD.

Since 2015, the World Health Organization (WHO) has had a global strategy and action plan to integrate WASH with other public health interventions in order to accelerate the elimination of NTDs. The plan aimed to intensify control of, or eliminate, certain NTDs in specific regions by 2020. It refers to the NTD roadmap milestones, which included, for example, eradication of dracunculiasis by 2015 and of yaws by 2020, elimination of trachoma and lymphatic filariasis as public health problems by 2020, and intensified control of dengue, schistosomiasis and soil-transmitted helminthiases. The plan consists of four strategic objectives: improving awareness of the benefits of joint WASH and NTD actions; monitoring WASH and NTD actions to track progress; strengthening the evidence base on how to deliver effective WASH interventions; and planning, delivering and evaluating WASH and NTD programs with the involvement of all stakeholders.

Additional health risks for women
Women tend to face a higher risk of disease and illness due to limited WASH access.
Heavily pregnant women face severe hardship walking to and from a water collection site. The consumption of unclean water leading to infection in the fetus accounts for 15% of deaths of women during pregnancy globally. Illnesses and diseases that can come from poor menstrual hygiene management become more likely when clean water and toilets are unavailable. In Bangladesh and India, women rely on old cloths to absorb menstrual blood and use water to clean and reuse them. Without access to clean water and hygiene, these women may experience unnecessary health problems in connection with their periods.

Effects of climate change on health risks
Global climate change can increase the health risks for some of the infectious diseases mentioned above; see the section below on negative impacts of climate change.

In non-household settings
Non-household settings for WASH include six types: schools, healthcare facilities, workplaces (including prisons), temporary use settings, mass gatherings, and settings for dislocated populations.

In schools
More than half of all primary schools in developing countries with available data do not have adequate water facilities, and nearly two-thirds lack adequate sanitation. Even where facilities exist, they are often in poor condition. Children are able to participate more fully in school when there is improved access to water.

Lack of WASH facilities can prevent students from attending school, particularly female students. Strong cultural taboos around menstruation, present in many societies, coupled with a lack of menstrual hygiene management services in schools, result in girls staying away from school during menstruation.

Reasons for missing or poorly maintained water and sanitation facilities at schools in developing countries include lacking inter-sectoral collaboration; lacking cooperation between schools, communities and different levels of government; and a lack of leadership and accountability.

Outcomes from improved WASH at schools
WASH in schools, sometimes called SWASH or WinS, significantly reduces hygiene-related disease, increases student attendance and contributes to dignity and gender equality. WASH in schools contributes to healthy, safe and secure school environments. It can also lead to children becoming agents of change for improving water, sanitation and hygiene practices in their families and communities. For example, data from over 10,000 schools in Zambia, analyzed in 2017, confirmed that improved sanitation provision in schools correlated with high female-to-male enrolment ratios and with reduced repetition and drop-out ratios, especially for girls.

Methods to improve WASH in schools
Methods to improve WASH infrastructure at schools include, on a policy level: broadening the focus of the education sector, establishing a systematic quality assurance system, and distributing and using funds wisely.
Other practical recommendations include: having a clear and systematic mobilization strategy, supporting the education sector in strengthening intersectoral partnerships, establishing constant monitoring located within the education sector, educating the educators, and partnering with school management. The support provided by development agencies to government at national, state and district levels helps to gradually create what is commonly referred to as an enabling environment for WASH in schools. Success also hinges on local-level leadership and a genuine collective commitment of school stakeholders towards school development. This applies to students and their representative clubs, headmasters, teachers and parents. Furthermore, other stakeholders have to be engaged in their direct sphere of influence, such as community members, community-based organizations, education officials and local authorities.

Group handwashing
Supervised daily group handwashing in schools is an effective strategy for building good hygiene habits, with the potential to lead to positive health and education outcomes for children. This has been implemented, for example, in the "Essential Health Care Program" by the Department of Education in the Philippines. Mass deworming twice a year, supplemented by daily handwashing with soap and daily tooth brushing with fluoride, is at the core of this national program. It has also been successfully implemented in Indonesia.

In healthcare facilities
The provision of adequate water, sanitation and hygiene is an essential part of providing basic health services in healthcare facilities. WASH in healthcare facilities helps prevent the spread of infectious diseases and protects staff and patients. WASH services in health facilities in developing countries are currently often lacking.

According to the World Health Organization, data from 54 low- and middle-income countries, representing 66,101 health facilities, show that 38% of healthcare facilities lack improved water sources, 19% lack improved sanitation, and 35% lack access to water and soap for handwashing. The absence of basic WASH amenities compromises the ability to provide routine services and hinders the prevention and control of infections. The provision of water in health facilities was lowest in Africa, where 42% of healthcare facilities lack an improved source of water on-site or nearby. The provision of sanitation was lowest in the Americas, with 43% of healthcare facilities lacking adequate services.

In 2019, the WHO estimated that "One in four health care facilities lack basic water services, and one in five have no sanitation service – impacting 2.0 and 1.5 billion people, respectively." Furthermore, it estimated that "health care facilities in low-income countries are at least three times as likely to have no water service as facilities in higher resource settings". This is thought to contribute to the fact that maternal sepsis is twice as common in developing countries as in high-income countries.

Barriers to providing WASH in healthcare facilities include incomplete standards, inadequate monitoring, disease-specific budgeting, a disempowered workforce, and poor WASH infrastructure. The improvement of WASH standards within health facilities needs to be guided by national policies and standards, as well as an allocated budget to improve and maintain services.
A number of solutions exist that can considerably improve the health and safety of both patients and service providers at health facilities:
Availability of safe water, for drinking but also for use in surgery and deliveries, food preparation, bathing and showering. There is a need for improved water pump systems within health facilities.
Improved handwashing practices among healthcare staff. This requires functional handwashing stations at strategic locations within health facilities, i.e. at points of care and at toilets.
Waste management: proper healthcare waste management and the safe disposal of excreta and wastewater are crucial to preventing the spread of disease.
Hygiene promotion for patients, visitors and staff.
Accessible and clean toilets, separated by gender, in sufficient numbers for staff, patients and visitors.
Improving access to handwashing and sanitation facilities in healthcare settings will significantly reduce infection and mortality rates, particularly in maternal and child health.

In prisons
In developing countries, prison buildings are very often overcrowded and dilapidated. A report by the ICRC states that "Measures depriving persons of their freedom must in no way, whatever the circumstances, be made more severe by treatment or material conditions of detention which undermine the dignity and the rights of the individual."

The water supply systems and sanitary facilities in prisons are often insufficient to meet the needs of the prison population where the number of detainees exceeds a prison's capacity. Overuse of the facilities results in rapid deterioration. The budget allocated by the state for prisons is often insufficient to cover the detainees' needs in terms of food and medical care, let alone the upkeep of water and sanitation facilities. Nevertheless, even with limited funds, it is possible to maintain or renovate decaying infrastructure with the right planning approaches and suitable low-cost water supply and sanitation options.

Impacts on women
Impacts on women and girls arising from a lack of proper facilities include the burden of time required to collect water from distant sources when there is no water access on the premises, as well as specific hygiene and privacy needs related to urination, menstruation, pregnancy, and birth. There are also restrictive gender norms around water-related occupations. Violence against women is a further problem: to access water or toilets, women may have to leave the premises and travel some distance, often alone or in the dark.

Time required to collect water
The lack of accessible, sufficient, clean and affordable water supply has adverse impacts specifically related to women in developing nations.
It is estimated that 263 million people worldwide spent over 30 minutes per round trip to collect water from an improved source. In sub-Saharan Africa, women and girls carry water containers an average of three miles each day, spending 40 billion hours per year on water collection (walking to the water source, waiting in line, and walking back). The time spent collecting water can come at the expense of education, income-generating activities, cultural and political involvement, and rest and recreation. For example, in low-income areas of Nairobi, women carry 44-pound containers of water back to their homes, taking anywhere between one and several hours to wait for and collect the water.

In many parts of the world, getting and providing water is considered "women's work," so gender and water access are intricately linked. Water gathering and supply to the family unit remains primarily a woman's task in less developed countries where water gathering is considered a main chore. This water work is also largely unpaid household work based on patriarchal gender norms, and is often related to domestic work such as laundry, cooking and childcare. Areas that rely on women to collect water include countries in Africa, South Asia and the Middle East.

Gender norms for occupations
Gender norms can negatively affect how men and women access water through behavior expectations along gender lines; for example, where water collection is a woman's chore, men who collect water may face discrimination for performing perceived women's work. Women are likely to be deterred from entering water utilities in developing countries because "social norms prescribe that it is an area of work that is not suitable for them or that they are incapable of performing well". Nevertheless, a 2019 World Bank study found that the proportion of female water professionals has grown in the past few years. In many societies, the task of cleaning toilets falls to women or children, which can increase their exposure to disease.

Violence against women
Women and girls usually bear the responsibility for collecting water, which is often very time-consuming and arduous, and can also be dangerous. Women and girls who collect water may face physical and sexual assault along the way. This includes vulnerability to rape when collecting water in distant areas, domestic violence over the amount of water collected, and fights over scarce water supplies. A study in India, for example, found that women felt intense fear of sexual violence when accessing water and sanitation services. A similar study in Uganda found that women reported feeling unsafe while journeying to toilets, particularly at night.

Challenges
Equitable access to drinking water supply
There are inequalities in access to water, sanitation and hygiene services, related for example to income level and gender. In 2019, in the 24 countries where disaggregated data was available, basic water coverage among the richest wealth quintile was at least twice as high as coverage among the poorest quintile. In Bangladesh, for example, minority ethnic groups have lower levels of access to WASH than the rest of the Bengali population, which has been attributed to "structural racial discrimination". Access to WASH services also varies within nations depending on socio-economic status, political power, and level of urbanization.
In 2004 it was found that urban households are 30% and 135% more likely to have access to improved water sources and sanitation, respectively, compared to rural areas.

The human rights to water and sanitation prohibit discrimination on the grounds of "race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth, disability or other status". These are all dimensions of inequality in WASH services.

Urban low-income areas
There are three main barriers to improving urban services in slum areas. Firstly, there is insufficient supply, especially of networked services. Secondly, demand constraints limit people's access to these services (for example due to low willingness to pay). Thirdly, institutional constraints prevent the poor from accessing adequate urban services.

Water pollution
Water supply sources include surface water and groundwater. These important water resources are often at risk of being polluted or overused.

Failures of WASH systems
The failures of water supply systems (including water points, wells and boreholes) and sanitation systems have been well documented. They have been attributed to financial costs, inadequate technical training for operations and maintenance, poor use of new facilities and taught behaviors, and a lack of community participation and ownership. The poorest populations often cannot afford the fees required for operation and maintenance of WASH infrastructure, preventing them from benefiting even where systems do exist. Contamination of water in distribution systems is a further challenge and can contribute to the spread of waterborne diseases.

Effectiveness of WASH interventions on health outcomes
There is debate in the academic literature about the effectiveness of WASH programs in low- and middle-income countries on health outcomes. Many studies provide poor-quality evidence on the causal impact of WASH programs on health outcomes of interest. The nature of WASH interventions is such that high-quality trials, such as randomized controlled trials (RCTs), are expensive, difficult and in many cases not ethical, so causal estimates from such studies are prone to bias due to residual confounding. Blind studies of WASH interventions also pose ethical challenges and difficulties associated with implementing new technologies or behavioral changes without participants' knowledge. Moreover, scholars suggest a need for longer-term studies of technology efficacy, greater analysis of sanitation interventions, and studies of combined effects from multiple interventions in order to better gauge WASH health outcomes.

Many scholars have attempted to summarize the evidence of WASH interventions from the limited number of high-quality studies. Hygiene interventions, in particular those promoting handwashing, appear to be especially effective in reducing morbidity. A meta-analysis of the literature found that handwashing interventions reduced the relative risk of diarrhea by approximately 40%. Similarly, handwashing promotion has been found to be associated with a 47% decrease in morbidity.
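To make the relative-risk figure above concrete, here is an illustrative calculation; the baseline incidence is a hypothetical value of my own, not from the cited meta-analysis:

# What a ~40% relative-risk reduction means in absolute terms, under an
# assumed (hypothetical) baseline diarrhea incidence.
baseline = 2.0        # assumed episodes per child-year without intervention
relative_risk = 0.60  # handwashing promotion: ~40% relative risk reduction

with_intervention = baseline * relative_risk
print(f"episodes averted per child-year: {baseline - with_intervention:.1f}")
# -> 0.8 episodes averted per child-year under these assumptions

The absolute benefit scales with the baseline: the same relative risk averts more episodes where incidence is high, which is one reason pooled effect sizes translate differently across settings.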
However, a challenge with WASH behavioral intervention studies is the inability to ensure compliance, especially when studies rely on self-reporting of disease rates. This prevents researchers from concluding a causal relationship between decreased morbidity and the intervention. For example, researchers may conclude that educating communities about handwashing is effective at reducing disease, but cannot conclude that handwashing itself reduces disease. Point-of-use water supply and point-of-use water quality interventions show similar effectiveness to handwashing; those that include the provision of safe storage containers demonstrate increased disease reduction in infants.

Specific types of water quality improvement projects can have a protective effect on morbidity and mortality. A randomized controlled trial in India concluded that the provision of chlorine tablets for improving water quality led to a 75% decrease in the incidence of cholera among the study population. A quasi-randomized study of historical data from the United States also found that the introduction of clean water technologies in major cities was responsible for close to half the reduction in total mortality and over three-quarters of the reduction in infant mortality. Distributing chlorine products, or other water disinfectants, for use in the home may reduce instances of diarrhea. However, most studies on water quality improvement interventions suffer from residual confounding or poor adherence to the mechanism being studied. For instance, a study conducted in Nepal found that adherence to the use of chlorine tablets or chlorine solution to purify water was as low as 18.5% among program households. A study of a water well chlorination program in Guinea-Bissau in 2008 reported that, because of the program, families stopped treating water within their households, which consequently increased their risk of cholera; it concluded that well chlorination without proper promotion and education led to a false sense of security.

Studies of the effect of sanitation interventions alone on health are rare. Where sanitation measures are evaluated, they are mostly included as part of a package of different interventions. A pooled analysis of the limited number of studies on sanitation interventions suggests that improving sanitation has a protective effect on health. A UNICEF-funded sanitation intervention (packaged into a broader WASH intervention) was also found to have a protective effect on under-five diarrhea incidence, but not on household diarrhea incidence.

Climate change aspects
Greenhouse gas emissions
Water and sanitation services contribute to greenhouse gas emissions. In the international greenhouse gas protocol, these emissions are grouped into three scopes: direct emissions and two types of indirect emissions (see below).

Direct emissions (Scope 1)
Scope 1 includes "direct emissions resulting directly from the activity". In the WASH sector, these are methane and nitrous oxide emissions during wastewater and sewage sludge treatment. Sanitation services produce about 2–6% of global human-caused methane emissions. Septic tanks, pit latrines, anaerobic lagoons and anaerobic digesters are all anaerobic treatment processes that emit methane, which may or may not be captured (in the case of septic tanks it is usually not captured). It has been estimated, using data from 2012 and 2013, that "wastewater treatment in centralized facilities contributes alone some 3% of global nitrous oxide emissions and 7% of anthropogenic methane emissions". Data from 2023 from centralized sewage treatment plants in the United States indicate that methane emissions are about twice the estimates provided by the IPCC in 2019:
10.9 ± 7.0 compared to 4.3–6.1 MMT (million metric tons) CO2-eq per year.

Current methods for estimating sanitation emissions underestimate the significance of methane emissions from non-sewered sanitation systems (NSSS), despite the fact that such systems are prevalent in many countries. NSSS play a vital role in the safe management of fecal sludge and account for approximately half of all existing sanitation provisions. Global methane emissions from NSSS in 2020 were estimated at 377 Mt CO2e per year, or 4.7% of global anthropogenic methane emissions, which is comparable to the greenhouse gas emissions from conventional wastewater treatment plants. GHG emissions from non-sewered sanitation systems are therefore a non-negligible source. India and China contribute extensively to the methane emissions of NSSS because of their large populations and NSSS utilization.

Indirect emissions associated with the energy required (Scope 2)
Scope 2 includes "indirect emissions associated with the energy required by the activity". Companies that deal with water and wastewater services need energy for various processes, and they use the energy mix available in their country: the higher the proportion of fossil fuels in the energy mix, the higher the Scope 2 GHG emissions. The processes that need energy include water abstraction (e.g. groundwater pumping), drinking water storage, water conveyance, water treatment, water distribution, treatment of wastewater, water end use (e.g. water heating), desalination and wastewater reuse. For example, electrical energy is needed for pumping sewage and for mechanical aeration in activated sludge treatment plants.

When looking at emissions from the sanitation and wastewater sector, most attention focuses on treatment systems, particularly treatment plants. Treatment plants require considerable energy input and are estimated to account for 3% of global electricity consumption. This focus makes sense for high-income countries, where wastewater treatment is the biggest energy consumer compared to other activities of the water sector. The aeration processes used in many secondary treatment processes are particularly energy-intensive, using about 50% of the total energy required for treatment. The amount of energy needed to treat wastewater depends on several factors: the quantity and quality of the wastewater (i.e. how much there is and how polluted it is), the treatment level required, which in turn influences the type of treatment process selected, and the energy efficiency of the treatment process.

Indirect emissions related to the activity but caused by other organizations (Scope 3)
Scope 3 includes "indirect emissions related to the activity but caused by other organizations". The indirect emissions under Scope 3 are difficult to assess in a standardized way. They include, for example, emissions from constructing infrastructure, from manufacturing the chemicals needed in the treatment process, and from managing the by-product sewage sludge.
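A back-of-envelope check of the NSSS estimate quoted above (my own arithmetic, not from the cited study):

# If non-sewered sanitation systems emitted 377 Mt CO2e of methane in 2020,
# and that was 4.7% of global anthropogenic methane emissions, the implied
# global total is:
nsss_emissions = 377  # Mt CO2e per year
nsss_share = 0.047    # fraction of global anthropogenic methane emissions

print(f"implied global anthropogenic methane: {nsss_emissions / nsss_share:,.0f} Mt CO2e/yr")
# -> roughly 8,000 Mt CO2e per year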
Reducing greenhouse gas emissions
Solutions exist to reduce the greenhouse gas emissions of water and sanitation services. These solutions fall into three partly overlapping categories: firstly, reducing water and energy consumption through lean and efficient approaches; secondly, embracing the circular economy to produce energy and valuable products; and thirdly, planning to reduce GHG emissions through strategic decisions. Lean and efficient approaches include, for example, finding ways to reduce water loss from water networks and to reduce the infiltration of rainwater or groundwater into sewers. Incentives can also encourage households and industries to reduce their water consumption and their energy requirements for water heating. Protecting the quality of source water better is another way to reduce the energy required to treat raw water into drinking water.

Methods that fall into the category of circular economy include reusing water, nutrients and materials, and low-carbon energy production (e.g. solar power on the roofs of utility buildings, recovery of waste heat from wastewater, producing hydro-electricity by installing micro-turbines, and producing energy from biosolids and sewage sludge). Strategic decisions to reduce GHG emissions include awareness raising and education, governance that supports changing practices, providing economic incentives to conserve water and reduce consumption, and choosing low-carbon energy and supplies.

Negative impacts of climate change
The effects of climate change can have negative impacts on existing sanitation services in several ways, for example through damage and loss of services from floods and through the reduced carrying capacity of waters receiving wastewater. Weather- and climate-related aspects (variability, seasonality and extreme weather events) have always had an impact on the delivery of sanitation services. But extreme weather events, such as floods and droughts, are now generally increasing in frequency and intensity due to climate change in many regions. They affect the operation of water supply, storm drainage and sewerage infrastructure, and wastewater treatment plants.

Changes in the frequency and intensity of climate extremes could compound current challenges as water availability becomes more uncertain and health risks increase due to contaminated water sources. The effects of climate change can result in decreased water availability, increased water demand, damage to WASH facilities, and increased water contamination from pollutants. Through these impacts, climate change can "exacerbate many WASH-related risks and diseases".

Climate change poses increased risks to WASH systems, particularly in sub-Saharan Africa, where access to safely managed basic sanitation is low. In that region, poorly managed WASH systems, for example in informal settlements, make people more vulnerable to the effects of climate change than people elsewhere. In terms of the water cycle, climate change can affect soil infiltration, deeper percolation, and hence groundwater recharge. Rising temperatures also increase evaporative demand over land, which limits the amount of water available to replenish groundwater.

Climate change adaptation
Adaptation efforts in the WASH sector include, for example, protection of local water resources (as these become source water for drinking water supply) and investigating improvements to water supply and storage strategies.
It might also be necessary to adjust the utility's planning and operation.: 41  Climate change adaptation policies need to consider the risks from extreme weather events, with measures for both droughts and floods.: 61  Adaptation measures for droughts include, for example, reducing leakages proactively and communicating restrictions on water use to consumers. Adaptation measures for floods include, for example, reviewing the siting of water and wastewater treatment plants in floodplains and minimizing the impact of floodwater on operational equipment.: 61 
Nature-based solutions (NbS) can play an important role in climate change adaptation approaches for water and sanitation services.: 45  This includes ecological restoration (which can improve infiltration and thus reduce flooding), ecological engineering for wastewater treatment, green infrastructure for stormwater management, and measures for natural water retention.: 45  Most National Adaptation Plans published under the UN Framework Convention on Climate Change include measures to improve sanitation and hygiene. Engineers and planners need to adapt design standards for water and sanitation systems to account for changing climate conditions; otherwise, these infrastructure systems will become increasingly vulnerable in the future. The same applies to other key infrastructure systems such as transport, energy and communications.: 13 
Improving climate resilience
Climate-resilient water services (or climate-resilient WASH) are services that provide access to high quality drinking water during all seasons and even during extreme weather events. Climate resilience in general is the ability to recover from, or to mitigate vulnerability to, climate-related shocks such as floods and droughts. Climate resilient development has become the new paradigm for sustainable development, influencing theory and practice across all sectors globally. This is particularly true in the water sector, since water security is closely connected to climate change. On every continent, governments are now adopting policies for climate resilient economies. International frameworks such as the Paris Agreement and the Sustainable Development Goals are drivers for such initiatives. Several activities can improve water security and increase resilience to climate risks: carrying out a detailed analysis of climate risk to make climate information relevant to specific users; developing metrics for monitoring climate resilience in water systems (which will help to track progress and guide investments for water security); and using new institutional models that improve water security. Climate resilient policies can be useful for allocating water, keeping in mind that less water may be available in the future. This requires a good understanding of the current and future hydroclimatic situation.
For example, a better understanding of future changes in climate variability leads to a better response to their possible impacts. To build climate resilience into water systems, people need access to climate information that is appropriate for their local context.: 59  Climate information products are useful if they cover a wide range of temporal and spatial scales and provide information on regional water-related climate risks.: 58  For example, government staff need easy access to climate information to achieve better water management. Four important activities to achieve climate resilient WASH services include: First, a risk analysis is performed to look at possible implications of extreme weather events as well as preventive actions.: 4  Such preventive actions can include, for example, elevating infrastructure above expected flood levels. Secondly, managers assess the scope for reducing greenhouse gas emissions and put in place suitable options, e.g. using more renewable energy sources. Thirdly, water utilities ensure that water sources and sanitation services are reliable at all times of the year, including during droughts and floods. Finally, the management and service delivery models are strengthened so that they can withstand a crisis.: 5  To put climate resilience into practice and to engage better with politicians, the following guide questions are useful: "resilience of what, to what, for whom, over what time frame, by whom and at what scale?". For example, "resilience of what?" means thinking beyond infrastructure to also include the resilience of water resources, local institutions and water users. Another example is that "resilience for whom?" speaks to reducing vulnerability and preventing negative developments: some top-down interventions that work around power and politics may undermine indigenous knowledge and compromise community resilience.
Adaptive capacity for climate resilience
Adaptive capacity in water management systems can help to absorb some of the impacts of climate-related events and increase climate resilience.: 25  Stakeholders at various scales, from small urban utilities to national governments, need access to reliable information on regional climate and climate change. For example, context-specific climate tools can help national policy makers and sub-national practitioners make informed decisions to improve climate resilience. A global research program called REACH (led by the University of Oxford and funded by the UK Government's Foreign, Commonwealth & Development Office) is developing and using such climate tools for Kenya, Ethiopia and Bangladesh from 2015 to 2024.
Planning approaches
National WASH plans and monitoring
UN-Water carries out the Global Analysis and Assessment of Sanitation and Drinking-Water (GLAAS) initiative. This work examines the "extent to which countries develop and implement national policies and plans for WASH, conduct regular monitoring, regulate and take corrective action as needed, and coordinate these parallel processes with sufficient financial resources and support from strong national institutions." Many countries' WASH plans are not supported by the necessary financial and human resources, which hinders their implementation and intended outcomes for WASH service delivery. As of 2022, it is becoming more common for countries to include "climate change preparedness approaches" in their national WASH plans.
Preparedness in this context means working on mitigation, adaptation and resilience of WASH systems.: 11  Still, most national policies on WASH services do not set out how to address climate risks and how to increase the resilience of infrastructure and management.: vii 
Gender mainstreaming
The Dublin Statement on Water and Sustainable Development in 1992 included "Women play a central part in the provision, management and safeguarding of water" as one of four principles. In 1996, the World Bank published a Toolkit on Gender in Water and Sanitation. Gender-sensitive approaches to water and sanitation have proven to be cost effective. Water supply schemes in developing nations have shown higher success when planned and run with the full participation of women in the affected communities. The United Nations Interagency Network on Women and Gender Equality (IANWGE) established the Gender and Water Task Force in 2003. The Task Force became a UN-Water Task Force and took responsibility for the gender component of the International Water for Life Decade (2005–2015). The task force's mandate ended in 2015.
History
The history of water supply and sanitation is the topic of a separate article. The abbreviation WASH was used from 1988 onwards as an acronym for the Water and Sanitation for Health Project of the United States Agency for International Development. At that time, the letter "H" stood for health, not hygiene. Similarly, in Zambia the term WASHE was used in a report in 1987 and stood for Water Sanitation Health Education. An even older USAID WASH project report dates back to as early as 1981. From about 2001 onwards, international organizations active in water supply and sanitation advocacy, such as the Water Supply and Sanitation Collaborative Council and the International Water and Sanitation Centre (IRC) in the Netherlands, began to use WASH as an umbrella term for water, sanitation and hygiene. WASH has since been broadly adopted as a handy acronym for water, sanitation and hygiene in the international development context. The term WatSan was also used for a while, especially in the emergency response sector such as with IFRC and UNHCR, but has not proven as popular as WASH.
Society and culture
Global goals
Since 1990, the Joint Monitoring Program for Water Supply and Sanitation (JMP) of WHO and UNICEF has regularly produced estimates of global WASH progress. The JMP was already responsible for monitoring the UN's Millennium Development Goal (MDG) Target 7.C, which aimed to "halve, by 2015, the proportion of the population without sustainable access to safe drinking water and basic sanitation". This was replaced in 2015 by Sustainable Development Goal 6 (SDG 6), which is to "ensure availability and sustainable management of water and sanitation for all" by 2030. To establish a reference point from which progress toward achieving the SDGs could be monitored, the JMP produced "Progress on Drinking Water, Sanitation and Hygiene: 2017 Update and SDG Baselines". Expanding WASH coverage and monitoring in non-household settings such as schools, healthcare facilities and workplaces is also included in Sustainable Development Goal 6. WaterAid International is a non-governmental organization (NGO) that works on improving the availability of safe drinking water in some of the world's poorest countries. Sanitation and Water for All is a partnership that brings together national governments, donors, UN agencies, NGOs and other development partners.
They work to improve sustainable access to sanitation and water supply. In 2014, 77 countries had already met the MDG sanitation target, 29 were on track, and 79 were not on track.
Awards
Important awards for individuals or organizations working on WASH include the Stockholm Water Prize (since 1991) and the Sarphati Sanitation Awards (since 2013) for sanitation entrepreneurship.
United Nations organs
UNICEF - UNICEF's declared strategy is "to achieve universal and equitable access to safe and affordable drinking water for all". UNICEF includes WASH initiatives in its work with schools in over 30 countries.
UN-Water - an interagency mechanism which "coordinates the efforts of UN entities and international organizations working on water and sanitation issues".
Awareness raising through observance days
The United Nations' International Year of Sanitation in 2008 helped to increase attention to the funding of sanitation in the WASH programs of many donors. For example, the Bill and Melinda Gates Foundation has increased its funding for sanitation projects since 2009, with a strong focus on reuse of excreta. Awareness raising for the importance of WASH takes place through several United Nations international observance days, namely World Water Day, Menstrual Hygiene Day, World Toilet Day and Global Handwashing Day.
By country
See also
Human right to water and sanitation
Water issues in developing countries
Global Water Security & Sanitation Partnership
Water Supply and Sanitation Collaborative Council
Sustainable Sanitation Alliance
References
External links
Water, Sanitation and Hygiene (WASH) - UNHCR website
Global Water, Sanitation and Hygiene Home - Healthy Water - Centers for Disease Control and Prevention website
Rural Water Supply Network (RWSN)
seasonal food
Seasonal food refers to the times of year when the harvest or the flavour of a given type of food is at its peak. This is usually the time when the item is harvested, with some exceptions; an example being sweet potatoes, which are best eaten several weeks after harvest. Eating seasonal food reduces the greenhouse gas emissions resulting from food consumption and is integral to a low carbon diet. Macrobiotic diets emphasize eating locally grown foods that are in season.
History
The seasonal foods of Korea were formed against the backdrop of a natural environment in which the changes of farming life and the four seasons were evident, and they differed depending on the region, influenced by various geographical environments. Summer diets consisted of green beans, radish, lettuces, chicories, aubergine, carrots, cucumber, gherkins, watercress, marrow, courgettes, and rice. The meat accompanying these vegetables consisted mainly of poultry, ostrich and beef products. Fruity desserts included fruits such as lemon, lime, quinces, nectarines, mulberry, cherries, plums, apricot, grapes, pomegranates, watermelon, pears, apple, and melon, while drinks involved syrups and jams of fruit pastes, lemon, rose, jasmine, ginger and fennel. In autumn, meals included cabbage, cauliflower, carrots, celery, gourd, wheat, barley, millet, turnips, parsnips, onions, acorns, peanuts, pulses, and olive oil. Drinks incorporated aromatic herbs and flower distillations of essential oils. In the digital age, apps and websites track in-season food.
Climate impact
Use of food according to its seasonal availability can reduce the greenhouse gas emissions resulting from food consumption (food miles). According to a 2021 study backed by the United Nations, more than a third of global greenhouse gas emissions come from food production, processing, and packaging.
Gallery
See also
Slow Food
References
External links
BBC Good Food - Seasonality table (UK)
BBC Food - In season section
Seasonal food calendar (note: this site requires you to enter a New York zip code. 10003 is one that will work)
SYUN - Japanese-English Syun 旬 Seasonal Dictionary with photo (JP)
sustainable energy
Energy is sustainable if it "meets the needs of the present without compromising the ability of future generations to meet their own needs." Most definitions of sustainable energy include considerations of environmental aspects such as greenhouse gas emissions and social and economic aspects such as energy poverty. Renewable energy sources such as wind, hydroelectric power, solar, and geothermal energy are generally far more sustainable than fossil fuel sources. However, some renewable energy projects, such as the clearing of forests to produce biofuels, can cause severe environmental damage. The role of non-renewable energy sources in sustainable energy has been controversial. Nuclear power is a low-carbon source whose historic mortality rates are comparable to those of wind and solar, but its sustainability has been debated because of concerns about radioactive waste, nuclear proliferation, and accidents. Switching from coal to natural gas has environmental benefits, including a lower climate impact, but may lead to a delay in switching to more sustainable options. Carbon capture and storage can be built into power plants to remove their carbon dioxide (CO2) emissions, but this technology is expensive and has rarely been implemented. Fossil fuels provide 85% of the world's energy consumption, and the energy system is responsible for 76% of global greenhouse gas emissions. Around 790 million people in developing countries lack access to electricity, and 2.6 billion rely on polluting fuels such as wood or charcoal to cook. Reducing greenhouse gas emissions to levels consistent with the 2015 Paris Agreement will require a system-wide transformation of the way energy is produced, distributed, stored, and consumed. The burning of fossil fuels and biomass is a major contributor to air pollution, which causes an estimated 7 million deaths each year. Therefore, the transition to a low-carbon energy system would have strong co-benefits for human health. Pathways exist to provide universal access to electricity and clean cooking in ways that are compatible with climate goals while bringing major health and economic benefits to developing countries. Climate change mitigation pathways have been proposed to limit global warming to 2 °C (3.6 °F). These pathways include phasing out coal-fired power plants, producing more electricity from clean sources such as wind and solar, and shifting towards using electricity instead of fossil fuels in sectors such as transport and heating buildings. For some energy-intensive technologies and processes that are difficult to electrify, many pathways describe a growing role for hydrogen fuel produced from low-emission energy sources. To accommodate larger shares of variable renewable energy, electrical grids require flexibility through infrastructure such as energy storage. To make deep reductions in emissions, infrastructure and technologies that use energy, such as buildings and transport systems, would need to be changed to use clean forms of energy and also conserve energy. Some critical technologies for eliminating energy-related greenhouse gas emissions are not yet mature. Wind and solar energy generated 8.5% of worldwide electricity in 2019. This share has grown rapidly while costs have fallen and are projected to continue falling. 
The Intergovernmental Panel on Climate Change (IPCC) estimates that 2.5% of world gross domestic product (GDP) would need to be invested in the energy system each year between 2016 and 2035 to limit global warming to 1.5 °C (2.7 °F). Well-designed government policies that promote energy system transformation can lower greenhouse gas emissions and improve air quality. In many cases, they also increase energy security. Policy approaches include carbon pricing, renewable portfolio standards, phase-outs of fossil fuel subsidies, and the development of infrastructure to support electrification and sustainable transport. Funding the research, development, and demonstration of new clean energy technologies is also an important role of the government.
Definitions and background
Definitions
The United Nations Brundtland Commission described the concept of sustainable development, for which energy is a key component, in its 1987 report Our Common Future. It defined sustainable development as meeting "the needs of the present without compromising the ability of future generations to meet their own needs". This description of sustainable development has since been referenced in many definitions and explanations of sustainable energy. No single interpretation of how the concept of sustainability applies to energy has gained worldwide acceptance. Working definitions of sustainable energy encompass multiple dimensions of sustainability such as environmental, economic, and social dimensions. Historically, the concept of sustainable energy development has focused on emissions and on energy security. Since the early 1990s, the concept has broadened to encompass wider social and economic issues. The environmental dimension of sustainability includes greenhouse gas emissions, impacts on biodiversity and ecosystems, hazardous waste and toxic emissions, water consumption, and depletion of non-renewable resources. Energy sources with low environmental impact are sometimes called green energy or clean energy. The economic dimension of sustainability covers economic development, efficient use of energy, and energy security to ensure that each country has constant access to sufficient energy. Social issues include access to affordable and reliable energy for all people, workers' rights, and land rights.
Environmental impacts
The current energy system contributes to many environmental problems, including climate change, air pollution, biodiversity loss, the release of toxins into the environment, and water scarcity. As of 2019, 85% of the world's energy needs are met by burning fossil fuels. Energy production and consumption are responsible for 76% of annual human-caused greenhouse gas emissions as of 2018. The 2015 international Paris Agreement on climate change aims to limit global warming to well below 2 °C (3.6 °F) and preferably to 1.5 °C (2.7 °F); achieving this goal will require that emissions be reduced as soon as possible and reach net-zero by mid-century. The burning of fossil fuels and biomass is a major source of air pollution, which causes an estimated 7 million deaths each year, with the greatest attributable disease burden seen in low and middle-income countries. Fossil-fuel burning in power plants, vehicles, and factories is the main source of emissions that combine with oxygen in the atmosphere to cause acid rain. Air pollution is the second-leading cause of death from non-infectious disease.
An estimated 99% of the world's population lives with levels of air pollution that exceed the World Health Organization recommended limits. Cooking with polluting fuels such as wood, animal dung, coal, or kerosene is responsible for nearly all indoor air pollution, which causes an estimated 1.6 to 3.8 million deaths annually, and also contributes significantly to outdoor air pollution. Health effects are concentrated among women, who are likely to be responsible for cooking, and young children. Environmental impacts extend beyond the by-products of combustion. Oil spills at sea harm marine life and may cause fires which release toxic emissions. Around 10% of global water use goes to energy production, mainly for cooling in thermal energy plants. In dry regions, this contributes to water scarcity. Bioenergy production, coal mining and processing, and oil extraction also require large amounts of water. Excessive harvesting of wood and other combustible material for burning can cause serious local environmental damage, including desertification. In 2021, UNECE published a lifecycle analysis of the environmental impact of numerous electricity generation technologies, accounting for the following: resource use (minerals, metals); land use; resource use (fossils); water use; particulate matter; photochemical ozone formation; ozone depletion; human toxicity (non-cancer); ionising radiation; human toxicity (cancer); eutrophication (terrestrial, marine, freshwater); ecotoxicity (freshwater); acidification; climate change.
Sustainable development goals
Meeting existing and future energy demands in a sustainable way is a critical challenge for the global goal of limiting climate change while maintaining economic growth and enabling living standards to rise. Reliable and affordable energy, particularly electricity, is essential for health care, education, and economic development. As of 2020, 790 million people in developing countries do not have access to electricity, and around 2.6 billion rely on burning polluting fuels for cooking. Improving energy access in the least-developed countries and making energy cleaner are key to achieving most of the United Nations 2030 Sustainable Development Goals, which cover issues ranging from climate action to gender equality. Sustainable Development Goal 7 calls for "access to affordable, reliable, sustainable and modern energy for all", including universal access to electricity and to clean cooking facilities by 2030.
Energy conservation
Energy efficiency—using less energy to deliver the same goods or services, or delivering comparable services with fewer goods—is a cornerstone of many sustainable energy strategies. The International Energy Agency (IEA) has estimated that increasing energy efficiency could achieve 40% of the greenhouse gas emission reductions needed to fulfil the Paris Agreement's goals. Energy can be conserved by increasing the technical efficiency of appliances, vehicles, industrial processes, and buildings. Another approach is to use fewer materials whose production requires a lot of energy, for example through better building design and recycling. Behavioural changes such as using videoconferencing rather than business flights, or making urban trips by cycling, walking or public transport rather than by car, are another way to conserve energy.
Government policies to improve efficiency can include building codes, performance standards, carbon pricing, and the development of energy-efficient infrastructure to encourage changes in transport modes. The energy intensity of the global economy (the amount of energy consumed per unit of gross domestic product (GDP)) is a rough indicator of the energy efficiency of economic production. In 2010, global energy intensity was 5.6 megajoules (1.6 kWh) per US dollar of GDP. United Nations goals call for energy intensity to decrease by 2.6% each year between 2010 and 2030. In recent years this target has not been met: between 2017 and 2018, for instance, energy intensity decreased by only 1.1%. Efficiency improvements often lead to a rebound effect in which consumers use the money they save to buy more energy-intensive goods and services. For example, recent technical efficiency improvements in transport and buildings have been largely offset by trends in consumer behaviour, such as selecting larger vehicles and homes.
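As a worked illustration of the intensity target, the sketch below compounds the called-for 2.6% annual decline from the 2010 baseline of 5.6 MJ/$ and contrasts it with the slower 1.1% pace mentioned above. This is simple arithmetic on the figures in the text, not an official projection.

```python
# Compounding the energy-intensity figures from the 2010 baseline (5.6 MJ per US$).

baseline = 5.6   # MJ per US$ of GDP in 2010
years = 20       # 2010 -> 2030

target = baseline * (1 - 0.026) ** years    # UN-targeted 2.6%/yr decline
observed = baseline * (1 - 0.011) ** years  # pace seen between 2017 and 2018

print(f"2030 intensity if target met:  {target:.2f} MJ/$")    # ~3.3 MJ/$
print(f"2030 intensity at 1.1% per yr: {observed:.2f} MJ/$")  # ~4.5 MJ/$
```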
Sustainable energy sources
Renewable energy sources
Renewable energy sources are essential to sustainable energy, as they generally strengthen energy security and emit far fewer greenhouse gases than fossil fuels. Renewable energy projects sometimes raise significant sustainability concerns, such as risks to biodiversity when areas of high ecological value are converted to bioenergy production or wind or solar farms. Hydropower is the largest source of renewable electricity while solar and wind energy are growing rapidly. Photovoltaic solar and onshore wind are the cheapest forms of new power generation capacity in most countries. For more than half of the 770 million people who currently lack access to electricity, decentralised renewable energy such as solar-powered mini-grids is likely the cheapest method of providing it by 2030. United Nations targets for 2030 include substantially increasing the proportion of renewable energy in the world's energy supply. According to the International Energy Agency, renewable energy sources like wind and solar power are now a commonplace source of electricity, making up 70% of all new investments made in the world's power generation. The Agency expects renewables to become the primary energy source for electricity generation globally in the next three years, overtaking coal.
Solar
The Sun is Earth's primary source of energy, a clean and abundantly available resource in many regions. In 2019, solar power provided around 3% of global electricity, mostly through solar panels based on photovoltaic cells (PV). Solar PV is expected to be the electricity source with the largest installed capacity worldwide by 2027. The panels are mounted on top of buildings or installed in utility-scale solar parks. Costs of solar photovoltaic cells have dropped rapidly, driving strong growth in worldwide capacity. The cost of electricity from new solar farms is competitive with, or in many places cheaper than, electricity from existing coal plants. Various projections of future energy use identify solar PV as one of the main sources of energy generation in a sustainable mix. Most components of solar panels can be easily recycled, but this is not always done in the absence of regulation. Panels typically contain heavy metals, so they pose environmental risks if put in landfills. It takes fewer than two years for a solar panel to produce as much energy as was used for its production. Less energy is needed if materials are recycled rather than mined. In concentrated solar power, solar rays are concentrated by a field of mirrors, heating a fluid. Electricity is produced from the resulting steam with a heat engine. Concentrated solar power can support dispatchable power generation, as some of the heat is typically stored to enable electricity to be generated when needed. In addition to electricity production, solar energy is used more directly; solar thermal heating systems are used for hot water production, heating buildings, drying, and desalination.
Wind power
Wind has been an important driver of development over millennia, providing mechanical energy for industrial processes, water pumps, and sailing ships. Modern wind turbines are used to generate electricity and provided approximately 6% of global electricity in 2019. Electricity from onshore wind farms is often cheaper than existing coal plants and competitive with natural gas and nuclear. Wind turbines can also be placed offshore, where winds are steadier and stronger than on land but construction and maintenance costs are higher. Onshore wind farms, often built in wild or rural areas, have a visual impact on the landscape. While collisions with wind turbines kill both bats and, to a lesser extent, birds, these impacts are lower than from other infrastructure such as windows and transmission lines. The noise and flickering light created by the turbines can cause annoyance and constrain construction near densely populated areas. Wind power, in contrast to nuclear and fossil fuel plants, does not consume water. Little energy is needed for wind turbine construction compared to the energy produced by the wind power plant itself. Turbine blades are not fully recyclable, and research into methods of manufacturing easier-to-recycle blades is ongoing.
Hydropower
Hydroelectric plants convert the energy of moving water into electricity. In 2020, hydropower supplied 17% of the world's electricity, down from a high of nearly 20% in the mid-to-late 20th century. In conventional hydropower, a reservoir is created behind a dam. Conventional hydropower plants provide a highly flexible, dispatchable electricity supply. They can be combined with wind and solar power to meet peaks in demand and to compensate when wind and sun are less available. Compared to reservoir-based facilities, run-of-the-river hydroelectricity generally has less environmental impact. However, its ability to generate power depends on river flow, which can vary with daily and seasonal weather. Reservoirs provide water quantity controls that are used for flood control and flexible electricity output while also providing security during drought for drinking water supply and irrigation. Hydropower ranks among the energy sources with the lowest levels of greenhouse gas emissions per unit of energy produced, but levels of emissions vary enormously between projects. The highest emissions tend to occur with large dams in tropical regions. These emissions are produced when the biological matter that becomes submerged in the reservoir's flooding decomposes and releases carbon dioxide and methane. Deforestation and climate change can reduce energy generation from hydroelectric dams. Depending on location, large dams can displace residents and cause significant local environmental damage; potential dam failure could place the surrounding population at risk.
Geothermal
Geothermal energy is produced by tapping into deep underground heat and harnessing it to generate electricity or to heat water and buildings. The use of geothermal energy is concentrated in regions where heat extraction is economical: a combination is needed of high temperatures, heat flow, and permeability (the ability of the rock to allow fluids to pass through). Power is produced from the steam created in underground reservoirs. Geothermal energy provided less than 1% of global energy consumption in 2020. Geothermal energy is a renewable resource because thermal energy is constantly replenished from neighbouring hotter regions and the radioactive decay of naturally occurring isotopes. On average, the greenhouse gas emissions of geothermal-based electricity are less than 5% of those of coal-based electricity. Geothermal energy carries a risk of inducing earthquakes, needs effective protection to avoid water pollution, and releases toxic emissions which can be captured.
Bioenergy
Biomass is renewable organic material that comes from plants and animals. It can either be burned to produce heat and electricity or be converted into biofuels such as biodiesel and ethanol, which can be used to power vehicles. The climate impact of bioenergy varies considerably depending on where biomass feedstocks come from and how they are grown. For example, burning wood for energy releases carbon dioxide; those emissions can be significantly offset if the trees that were harvested are replaced by new trees in a well-managed forest, as the new trees will absorb carbon dioxide from the air as they grow. However, the establishment and cultivation of bioenergy crops can displace natural ecosystems, degrade soils, and consume water resources and synthetic fertilisers. Approximately one-third of all wood used for traditional heating and cooking in tropical areas is harvested unsustainably. Bioenergy feedstocks typically require significant amounts of energy to harvest, dry, and transport; the energy usage for these processes may emit greenhouse gases. In some cases, the impacts of land-use change, cultivation, and processing can result in higher overall carbon emissions for bioenergy compared to using fossil fuels. Use of farmland for growing biomass can result in less land being available for growing food. In the United States, around 10% of motor gasoline has been replaced by corn-based ethanol, which requires a significant proportion of the harvest. In Malaysia and Indonesia, clearing forests to produce palm oil for biodiesel has led to serious social and environmental effects, as these forests are critical carbon sinks and habitats for diverse species. Since photosynthesis captures only a small fraction of the energy in sunlight, producing a given amount of bioenergy requires a large amount of land compared to other renewable energy sources. Second-generation biofuels, which are produced from non-food plants or waste, reduce competition with food production, but may have other negative effects including trade-offs with conservation areas and local air pollution. Relatively sustainable sources of biomass include algae, waste, and crops grown on soil unsuitable for food production. Carbon capture and storage technology can be used to capture emissions from bioenergy power plants. This process is known as bioenergy with carbon capture and storage (BECCS) and can result in net carbon dioxide removal from the atmosphere.
However, BECCS can also result in net positive emissions depending on how the biomass material is grown, harvested, and transported. Deployment of BECCS at the scales described in some climate change mitigation pathways would require converting large amounts of cropland.
Marine energy
Marine energy has the smallest share of the energy market. It includes tidal power, which is approaching maturity, and wave power, which is earlier in its development. Two tidal barrage systems, in France and in South Korea, make up 90% of global production. While single marine energy devices pose little risk to the environment, the impacts of larger devices are less well known.
Non-renewable energy sources
Fossil fuel switching and mitigation
Switching from coal to natural gas has advantages in terms of sustainability. For a given unit of energy produced, the life-cycle greenhouse-gas emissions of natural gas are around 40 times the emissions of wind or nuclear energy but are much lower than those of coal. Burning natural gas produces around half the emissions of coal when used to generate electricity and around two-thirds the emissions of coal when used to produce heat. Natural gas combustion also produces less air pollution than coal. However, natural gas is a potent greenhouse gas in itself, and leaks during extraction and transportation can negate the advantages of switching away from coal. The technology to curb methane leaks is widely available but not always used. Switching from coal to natural gas reduces emissions in the short term and thus contributes to climate change mitigation, but in the long term it does not provide a path to net-zero emissions. Developing natural gas infrastructure risks carbon lock-in and stranded assets, where new fossil infrastructure either commits to decades of carbon emissions or has to be written off before it makes a profit. The greenhouse gas emissions of fossil fuel and biomass power plants can be significantly reduced through carbon capture and storage (CCS). Most studies use a working assumption that CCS can capture 85–90% of the carbon dioxide (CO2) emissions from a power plant. Even if 90% of emitted CO2 is captured from a coal-fired power plant, its uncaptured emissions would still be many times greater than the emissions of nuclear, solar or wind energy per unit of electricity produced. Since coal plants using CCS would be less efficient, they would require more coal and thus increase the pollution associated with mining and transporting coal. The CCS process is expensive, with costs depending considerably on the location's proximity to suitable geology for carbon dioxide storage. Deployment of this technology is still very limited, with only 21 large-scale CCS plants in operation worldwide as of 2020.
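A rough calculation shows why even 90% capture leaves coal far above low-carbon sources. The emission factors below are assumed, order-of-magnitude literature values, not figures from this article, and the sketch deliberately ignores upstream mining and transport emissions, which CCS does not capture.

```python
# Residual emissions of a coal plant with 90% CO2 capture (assumed factors).

coal_stack = 800     # g CO2 per kWh at the stack, assumed
capture_rate = 0.90  # working assumption used in most CCS studies
wind_nuclear = 12    # g CO2-eq per kWh lifecycle, assumed

residual = coal_stack * (1 - capture_rate)
print(f"coal with 90% CCS: {residual:.0f} g CO2/kWh")               # 80 g/kWh
print(f"vs wind/nuclear:   {residual / wind_nuclear:.0f}x higher")  # ~7x

# Upstream coal mining and transport add further emissions on top of this,
# and the capture equipment itself lowers the plant's efficiency.
```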
Nuclear power
Nuclear power has been used since the 1950s as a low-carbon source of baseload electricity. Nuclear power plants in over 30 countries generate about 10% of global electricity. As of 2019, nuclear generated over a quarter of all low-carbon energy, making it the second largest source after hydropower. Nuclear power's lifecycle greenhouse gas emissions—including the mining and processing of uranium—are similar to the emissions from renewable energy sources. Nuclear power uses little land per unit of energy produced, compared to the major renewables. Reason magazine reported in May 2023 that "...biomass, wind, and solar power are set to occupy an area equivalent of the size of the European Union by 2050." Additionally, nuclear power does not create local air pollution. Although the uranium ore used to fuel nuclear fission plants is a non-renewable resource, enough exists to provide a supply for hundreds to thousands of years. However, uranium resources that can be accessed in an economically feasible manner are currently limited, and uranium production could struggle to keep up during an expansion phase. Climate change mitigation pathways consistent with ambitious goals typically see an increase in power supply from nuclear. There is controversy over whether nuclear power is sustainable, in part due to concerns around nuclear waste, nuclear weapon proliferation, and accidents. Radioactive nuclear waste must be managed for thousands of years, and nuclear power plants create fissile material that can be used for weapons. For each unit of energy produced, nuclear energy has caused far fewer accidental and pollution-related deaths than fossil fuels, and the historic fatality rate of nuclear is comparable to renewable sources. Public opposition to nuclear energy often makes nuclear plants politically difficult to implement. Reducing the time and cost of building new nuclear plants has been a goal for decades, but costs remain high and timescales long. Various new forms of nuclear energy are in development, hoping to address the drawbacks of conventional plants. Fast breeder reactors are capable of recycling nuclear waste and can therefore significantly reduce the amount of waste that requires geological disposal, but have not yet been deployed on a large-scale commercial basis. Nuclear power based on thorium (rather than uranium) may be able to provide higher energy security for countries that do not have a large supply of uranium. Small modular reactors may have several advantages over current large reactors: it should be possible to build them faster, and their modularization would allow for cost reductions via learning-by-doing. Several countries are attempting to develop nuclear fusion reactors, which would generate small amounts of waste and no risk of explosions. Although fusion power has taken steps forward in the lab, the multi-decade timescale needed to bring it to commercialization and then scale means it will not contribute to a 2050 net zero goal for climate change mitigation.
Energy system transformation
The emissions reductions necessary to keep global warming below 2 °C will require a system-wide transformation of the way energy is produced, distributed, stored, and consumed. For a society to replace one form of energy with another, multiple technologies and behaviours in the energy system must change. For example, transitioning from oil to solar power as the energy source for cars requires the generation of solar electricity, modifications to the electrical grid to accommodate fluctuations in solar panel output or the introduction of variable battery chargers and higher overall demand, adoption of electric cars, and networks of electric vehicle charging facilities and repair shops. Many climate change mitigation pathways envision three main aspects of a low-carbon energy system:
The use of low-emission energy sources to produce electricity
Electrification – that is, increased use of electricity instead of directly burning fossil fuels
Accelerated adoption of energy efficiency measures
Some energy-intensive technologies and processes are difficult to electrify, including aviation, shipping, and steelmaking.
There are several options for reducing emissions from these sectors: biofuels and synthetic carbon-neutral fuels can power many vehicles that are designed to burn fossil fuels; however, biofuels cannot be sustainably produced in the quantities needed, and synthetic fuels are currently very expensive. For some applications, the most prominent alternative to electrification is to develop a system based on sustainably produced hydrogen fuel. Full decarbonisation of the global energy system is expected to take several decades and can mostly be achieved with existing technologies. The IEA states that further innovation in the energy sector, such as in battery technologies and carbon-neutral fuels, is needed to reach net-zero emissions by 2050. Developing new technologies requires research and development, demonstration, and cost reductions via deployment. The transition to a zero-carbon energy system will bring strong co-benefits for human health: the World Health Organization estimates that efforts to limit global warming to 1.5 °C could save millions of lives each year from reductions in air pollution alone. With good planning and management, pathways exist to provide universal access to electricity and clean cooking by 2030 in ways that are consistent with climate goals. Historically, several countries have made rapid economic gains through coal usage. However, there remains a window of opportunity for many poor countries and regions to "leapfrog" fossil fuel dependency by developing their energy systems based on renewables, given adequate international investment and knowledge transfer.
Integrating variable energy sources
To deliver reliable electricity from variable renewable energy sources such as wind and solar, electrical power systems require flexibility. Most electrical grids were constructed for non-intermittent energy sources such as coal-fired power plants. As larger amounts of solar and wind energy are integrated into the grid, changes have to be made to the energy system to ensure that the supply of electricity is matched to demand. In 2019, these sources generated 8.5% of worldwide electricity, a share that has grown rapidly. There are various ways to make the electricity system more flexible. In many places, wind and solar generation are complementary on daily and seasonal scales: there is more wind during the night and in winter, when solar energy production is low. Linking different geographical regions through long-distance transmission lines allows for further cancelling out of variability. Energy demand can be shifted in time through energy demand management and the use of smart grids, matching demand to the times when variable energy production is highest. With grid energy storage, energy produced in excess can be released when needed. Further flexibility could be provided by sector coupling, that is, coupling the electricity sector to the heat and mobility sectors via power-to-heat systems and electric vehicles. Building overcapacity for wind and solar generation can help ensure that enough electricity is produced even during poor weather; in optimal weather, energy generation may have to be curtailed if excess electricity cannot be used or stored. The final demand-supply mismatch may be covered by using dispatchable energy sources such as hydropower, bioenergy, or natural gas.
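The balancing logic described in this section can be sketched as a toy hourly merit order: variable renewables serve demand first, surplus is stored or curtailed, and storage plus a dispatchable source cover any deficit. Every number here (the demand profile, renewable output, storage size and efficiency) is a made-up illustration, not grid data.

```python
# Toy hourly balance of a grid with variable renewables, storage and backup.
# All figures are illustrative assumptions, in MWh per hour.

demand     = [90, 85, 80, 95, 110, 120, 115, 100]
renewables = [120, 140, 60, 30, 40, 150, 130, 50]

storage, capacity, efficiency = 50.0, 100.0, 0.9  # assumed storage parameters

for hour, (load, vre) in enumerate(zip(demand, renewables)):
    if vre >= load:
        surplus = vre - load
        charge = min(surplus, (capacity - storage) / efficiency)
        storage += charge * efficiency       # store what fits, with losses
        curtailed, backup = surplus - charge, 0.0
    else:
        deficit = load - vre
        discharge = min(deficit, storage)    # draw down storage first
        storage -= discharge
        backup, curtailed = deficit - discharge, 0.0  # dispatchable covers the rest
    print(f"h{hour}: backup={backup:5.1f}  curtailed={curtailed:5.1f}  stored={storage:5.1f}")
```

In this sketch, building overcapacity shows up as more curtailment, while adding storage or transmission shows up as less dispatchable backup, mirroring the trade-offs in the text.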
Energy storage
Energy storage helps overcome barriers to intermittent renewable energy and is an important aspect of a sustainable energy system. The most commonly used and available storage method is pumped-storage hydroelectricity, which requires locations with large differences in height and access to water. Batteries, especially lithium-ion batteries, are also deployed widely. Batteries typically store electricity for short periods; research is ongoing into technology with sufficient capacity to last through seasons. Costs of utility-scale batteries in the US have fallen by around 70% since 2015; however, the cost and low energy density of batteries make them impractical for the very large energy storage needed to balance inter-seasonal variations in energy production. Pumped hydro storage and power-to-gas (converting electricity to gas and back) with capacity for multi-month usage have been implemented in some locations.
Electrification
Compared to the rest of the energy system, emissions can be reduced much faster in the electricity sector. As of 2019, 37% of global electricity is produced from low-carbon sources (renewables and nuclear energy). Fossil fuels, primarily coal, produce the rest of the electricity supply. One of the easiest and fastest ways to reduce greenhouse gas emissions is to phase out coal-fired power plants and increase renewable electricity generation. Climate change mitigation pathways envision extensive electrification—the use of electricity as a substitute for the direct burning of fossil fuels for heating buildings and for transport. Ambitious climate policy would see a doubling of the share of energy consumed as electricity by 2050, from 20% in 2020. One of the challenges in providing universal access to electricity is distributing power to rural areas. Off-grid and mini-grid systems based on renewable energy, such as small solar PV installations that generate and store enough electricity for a village, are important solutions. Wider access to reliable electricity would lead to less use of kerosene lighting and diesel generators, which are currently common in the developing world. Infrastructure for generating and storing renewable electricity requires minerals and metals, such as cobalt and lithium for batteries and copper for solar panels. Recycling can meet some of this demand if product lifecycles are well designed; however, achieving net zero emissions would still require major increases in mining for 17 types of metals and minerals. A small group of countries or companies sometimes dominates the markets for these commodities, raising geopolitical concerns. Most of the world's cobalt, for instance, is mined in the Democratic Republic of the Congo, a politically unstable region where mining is often associated with human rights risks. More diverse geographical sourcing may ensure a more flexible and less brittle supply chain.
Hydrogen
Hydrogen gas is widely discussed in the context of energy, as an energy carrier with the potential to reduce greenhouse gas emissions. This requires hydrogen to be produced cleanly and in sufficient quantities for sectors and applications where cheaper and more energy-efficient mitigation alternatives are limited. These applications include heavy industry and long-distance transport. Hydrogen can be deployed as an energy source in fuel cells to produce electricity, or via combustion to generate heat. When hydrogen is consumed in fuel cells, the only emission at the point of use is water vapour. Combustion of hydrogen can lead to the thermal formation of harmful nitrogen oxides. The overall lifecycle emissions of hydrogen depend on how it is produced.
Nearly all of the world's current supply of hydrogen is created from fossil fuels. The main method is steam methane reforming, in which hydrogen is produced from a chemical reaction between steam and methane, the main component of natural gas. Producing one tonne of hydrogen through this process emits 6.6–9.3 tonnes of carbon dioxide. While carbon capture and storage (CCS) could remove a large fraction of these emissions, the overall carbon footprint of hydrogen from natural gas is difficult to assess as of 2021, in part because of emissions (including vented and fugitive methane) created in the production of the natural gas itself. Electricity can be used to split water molecules, producing sustainable hydrogen provided the electricity was generated sustainably. However, this electrolysis process is currently more expensive than creating hydrogen from methane without CCS, and the efficiency of energy conversion is inherently low. Hydrogen can be produced when there is a surplus of variable renewable electricity, then stored and used to generate heat or to re-generate electricity. It can be further transformed into liquid fuels such as green ammonia and green methanol. Innovation in hydrogen electrolysers could make large-scale production of hydrogen from electricity more cost-competitive. Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. For steelmaking, hydrogen can function as a clean energy carrier and simultaneously as a low-carbon catalyst replacing coal-derived coke. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and, to a lesser extent, heavy goods vehicles. For light-duty vehicles including passenger cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in the future. Disadvantages of hydrogen as an energy carrier include high costs of storage and distribution due to hydrogen's explosivity, its large volume compared to other fuels, and its tendency to make pipes brittle.
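The steam-reforming figure quoted above (6.6–9.3 t CO2 per tonne of hydrogen) can be sanity-checked against the overall reaction stoichiometry, CH4 + 2 H2O → CO2 + 4 H2. The calculation below gives only the theoretical floor; real plants sit higher because additional natural gas is burned to supply process heat.

```python
# Stoichiometric floor for CO2 from steam methane reforming:
#   CH4 + 2 H2O -> CO2 + 4 H2  (one mole of CO2 per four moles of H2)

M_CO2 = 44.01  # g/mol
M_H2 = 2.016   # g/mol

floor = M_CO2 / (4 * M_H2)  # mass ratio, identical for g/g or t/t
print(f"theoretical minimum: {floor:.1f} t CO2 per t H2")  # ~5.5

# The reported 6.6-9.3 t CO2/t H2 exceeds this floor mainly because
# extra methane is burned to drive the endothermic reforming step.
```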
Energy usage technologies
Transport
Transport accounts for 14% of global greenhouse gas emissions, but there are multiple ways to make transport more sustainable. Public transport typically emits fewer greenhouse gases per passenger than personal vehicles, since trains and buses can carry many more passengers at once. Short-distance flights can be replaced by high-speed rail, which is more efficient, especially when electrified. Promoting non-motorised transport such as walking and cycling, particularly in cities, can make transport cleaner and healthier. The energy efficiency of cars has increased over time, but shifting to electric vehicles is an important further step towards decarbonising transport and reducing air pollution. A large proportion of traffic-related air pollution consists of particulate matter from road dust and the wearing-down of tyres and brake pads. Substantially reducing pollution from these non-tailpipe sources cannot be achieved by electrification; it requires measures such as making vehicles lighter and driving them less. Light-duty cars in particular are a prime candidate for decarbonization using battery technology. Around 25% of the world's CO2 emissions still originate from the transportation sector. Long-distance freight transport and aviation are difficult sectors to electrify with current technologies, mostly because of the weight of batteries needed for long-distance travel, battery recharging times, and limited battery lifespans. Where available, freight transport by ship and rail is generally more sustainable than by air and by road. Hydrogen vehicles may be an option for larger vehicles such as lorries. Many of the techniques needed to lower emissions from shipping and aviation are still early in their development, with ammonia (produced from hydrogen) a promising candidate for shipping fuel. Aviation biofuel may be one of the better uses of bioenergy if emissions are captured and stored during manufacture of the fuel.
Buildings and cooking
Over one-third of energy use is in buildings and their construction. To heat buildings, alternatives to burning fossil fuels and biomass include electrification through heat pumps or electric heaters, geothermal energy, central solar heating, reuse of waste heat, and seasonal thermal energy storage. Heat pumps provide both heat and air conditioning through a single appliance. The IEA estimates heat pumps could provide over 90% of space and water heating requirements globally. A highly efficient way to heat buildings is district heating, in which heat is generated in a centralised location and then distributed to multiple buildings through insulated pipes. Traditionally, most district heating systems have used fossil fuels, but modern and cold district heating systems are designed to use high shares of renewable energy. Cooling of buildings can be made more efficient through passive building design, planning that minimises the urban heat island effect, and district cooling systems that cool multiple buildings with piped cold water. Air conditioning requires large amounts of electricity and is not always affordable for poorer households. Some air conditioning units still use refrigerants that are greenhouse gases, as some countries have not ratified the Kigali Amendment to only use climate-friendly refrigerants. In developing countries where populations suffer from energy poverty, polluting fuels such as wood or animal dung are often used for cooking. Cooking with these fuels is generally unsustainable, because they release harmful smoke and because harvesting wood can lead to forest degradation. The universal adoption of clean cooking facilities, which are already ubiquitous in rich countries, would dramatically improve health and have minimal negative effects on climate. Clean cooking facilities, i.e. cooking facilities that produce less indoor soot, typically use natural gas, liquefied petroleum gas (both of which consume oxygen and produce carbon dioxide) or electricity as the energy source; biogas systems are a promising alternative in some contexts. Improved cookstoves that burn biomass more efficiently than traditional stoves are an interim solution where transitioning to clean cooking systems is difficult.
Industry
Over one-third of energy use is by industry. Most of that energy is deployed in thermal processes: generating heat, drying, and refrigeration. The share of renewable energy in industry was 14.5% in 2017—mostly low-temperature heat supplied by bioenergy and electricity.
The most energy-intensive activities in industry have the lowest shares of renewable energy, as they face limitations in generating heat at temperatures over 200 °C (390 °F). For some industrial processes, commercialisation of technologies that have not yet been built or operated at full scale will be needed to eliminate greenhouse gas emissions. Steelmaking, for instance, is difficult to electrify because it traditionally uses coke, which is derived from coal, both to create very high-temperature heat and as an ingredient in the steel itself. The production of plastic, cement, and fertilisers also requires significant amounts of energy, with limited possibilities available to decarbonise. A switch to a circular economy would make industry more sustainable, as it involves recycling more and thereby using less energy than mining and refining new raw materials.
Government policies
Well-designed government policies that promote energy system transformation can lower greenhouse gas emissions and improve air quality simultaneously, and in many cases can also increase energy security and lessen the financial burden of using energy. Environmental regulations have been used since the 1970s to promote more sustainable use of energy. Some governments have committed to dates for phasing out coal-fired power plants and ending new fossil fuel exploration. Governments can require that new cars produce zero emissions, or that new buildings are heated by electricity instead of gas. Renewable portfolio standards in several countries require utilities to increase the percentage of electricity they generate from renewable sources. Governments can accelerate energy system transformation by leading the development of infrastructure such as long-distance electrical transmission lines, smart grids, and hydrogen pipelines. In transport, appropriate infrastructure and incentives can make travel more efficient and less car-dependent. Urban planning that discourages sprawl can reduce energy use in local transport and buildings while enhancing quality of life. Government-funded research, procurement, and incentive policies have historically been critical to the development and maturation of clean energy technologies, such as solar and lithium batteries. In the IEA's scenario for a net zero-emission energy system by 2050, public funding is rapidly mobilised to bring a range of newer technologies to the demonstration phase and to encourage deployment. Carbon pricing (such as a tax on CO2 emissions) gives industries and consumers an incentive to reduce emissions while letting them choose how to do so. For example, they can shift to low-emission energy sources, improve energy efficiency, or reduce their use of energy-intensive products and services. Carbon pricing has encountered strong political pushback in some jurisdictions, whereas energy-specific policies tend to be politically safer. Most studies indicate that to limit global warming to 1.5 °C, carbon pricing would need to be complemented by stringent energy-specific policies. As of 2019, the price of carbon in most regions is too low to achieve the goals of the Paris Agreement. Carbon taxes provide a source of revenue that can be used to lower other taxes or help lower-income households afford higher energy costs. Some governments, such as the EU and the UK, are exploring the use of carbon border adjustments, which place tariffs on imports from countries with less stringent climate policies to ensure that industries subject to internal carbon prices remain competitive.
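To see how a carbon price works through generation costs, the sketch below applies an assumed price to assumed emission factors for coal- and gas-fired electricity; both the price and the factors are illustrative, not policy figures from the text.

```python
# Illustrative surcharge a carbon price adds to fossil generation costs.

carbon_price = 50  # assumed, US$ per tonne of CO2

emission_factors = {  # assumed tonnes CO2 per MWh of electricity
    "coal": 0.82,
    "natural gas": 0.37,
}

for fuel, factor in emission_factors.items():
    surcharge = carbon_price * factor  # US$ per MWh
    print(f"{fuel:>11}: +${surcharge:4.1f}/MWh  (+{surcharge / 10:.1f} cents/kWh)")
```

Because coal's emission factor is higher, the same price penalises coal roughly twice as heavily as gas, which is exactly the cost signal the paragraph above describes.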
The scale and pace of policy reforms that have been initiated as of 2020 are far less than needed to fulfil the climate goals of the Paris Agreement. In addition to domestic policies, greater international cooperation is required to accelerate innovation and to assist poorer countries in establishing a sustainable path to full energy access.
Countries may support renewables to create jobs. The International Labour Organization estimates that efforts to limit global warming to 2 °C would result in net job creation in most sectors of the economy. It predicts that 24 million new jobs would be created by 2030 in areas such as renewable electricity generation, improving energy efficiency in buildings, and the transition to electric vehicles, while six million jobs would be lost in sectors such as mining and fossil fuels. Governments can make the transition to sustainable energy more politically and socially feasible by ensuring a just transition for workers and regions that depend on the fossil fuel industry, so that they have alternative economic opportunities.
Finance
Raising enough money for innovation and investment is a prerequisite for the energy transition. The IPCC estimates that to limit global warming to 1.5 °C, US$2.4 trillion would need to be invested in the energy system each year between 2016 and 2035. Most studies project that these costs, equivalent to 2.5% of world GDP, would be small compared to the economic and health benefits. Average annual investment in low-carbon energy technologies and energy efficiency would need to be six times higher by 2050 compared to 2015. Underfunding is particularly acute in the least developed countries, which are not attractive to the private sector.
The United Nations Framework Convention on Climate Change estimates that climate financing totalled $681 billion in 2016. Most of this is private-sector investment in renewable energy deployment, public-sector investment in sustainable transport, and private-sector investment in energy efficiency. The Paris Agreement includes a pledge of an extra $100 billion per year from developed countries to poor countries, to fund climate change mitigation and adaptation. However, this goal has not been met, and measurement of progress has been hampered by unclear accounting rules. If energy-intensive businesses like chemicals, fertilizers, ceramics, steel, and non-ferrous metals invest significantly in R&D, hydrogen use in industry might amount to between 5% and 20% of all energy used.
Fossil fuel funding and subsidies are a significant barrier to the energy transition. Direct global fossil fuel subsidies were $319 billion in 2017. This rises to $5.2 trillion when indirect costs, such as the effects of air pollution, are priced in. Ending these could lead to a 28% reduction in global carbon emissions and a 46% reduction in air pollution deaths. Funding for clean energy has been largely unaffected by the COVID-19 pandemic, and pandemic-related economic stimulus packages offer possibilities for a green recovery.
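The investment figures in this section can be combined into a rough picture of the financing gap. A minimal sketch using only the numbers quoted above:

```python
# Back-of-the-envelope financing gap, using figures quoted in this section.

required_per_year = 2.4e12  # IPCC: US$2.4 trillion per year, 2016-2035
flows_2016 = 681e9          # UNFCCC: total climate financing in 2016

gap = required_per_year - flows_2016
print(f"Estimated annual shortfall: ${gap / 1e12:.2f} trillion")
print(f"2016 flows cover {flows_2016 / required_per_year:.0%} of the need")

# Fossil fuel subsidies: direct vs including indirect costs (air pollution etc.)
direct, total = 319e9, 5.2e12
print(f"Indirect costs make up {(total - direct) / total:.0%} of the $5.2 trillion")
```

On these figures, 2016 flows covered under a third of the estimated requirement, leaving a shortfall of roughly $1.7 trillion per year.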
oil well
An oil well is a borehole drilled into the Earth that is designed to bring petroleum oil hydrocarbons to the surface. Usually some natural gas is released as associated petroleum gas along with the oil. A well that is designed to produce only gas may be termed a gas well. Wells are created by drilling down into an oil or gas reserve and fitting the borehole with an extraction device such as a pumpjack, which allows extraction from the reserve. Creating the wells can be an expensive process, costing at least hundreds of thousands of dollars, and much more in hard-to-reach areas, e.g., when creating offshore oil platforms. The process of modern drilling for wells first started in the 19th century, but was made more efficient with advances to oil drilling rigs during the 20th century.
Wells are frequently sold or exchanged between different oil and gas companies as an asset, in large part because during falls in the price of oil and gas a well may be unproductive, but if prices rise, even low-production wells may be economically valuable. Moreover, new methods, such as hydraulic fracturing (a process of injecting gas or liquid to force more oil or natural gas production), have made some wells viable, although peak oil and climate policy surrounding fossil fuels have made fewer of these wells and costly techniques viable. However, the large number of neglected or poorly maintained wellheads is a large environmental issue: they may leak methane or other toxic emissions into local air, water or soil systems. This pollution often becomes worse when wells are abandoned or orphaned, that is, when they are no longer economically viable and no longer maintained by a company. A 2020 estimate by Reuters suggested that there were at least 29 million abandoned wells internationally, creating a significant source of the greenhouse gas emissions that cause climate change.
History
The earliest known oil wells were drilled in China in 347 CE. These wells had depths of up to about 240 metres (790 ft) and were drilled using bits attached to bamboo poles. The oil was burned to evaporate brine and produce salt. By the 10th century, extensive bamboo pipelines connected oil wells with salt springs. The ancient records of China and Japan are said to contain many allusions to the use of natural gas for lighting and heating. Petroleum was known as burning water in Japan in the 7th century.
According to Kasem Ajram, petroleum was distilled by the Persian alchemist Muhammad ibn Zakarīya Rāzi (Rhazes) in the 9th century, producing chemicals such as kerosene in the alembic (al-ambiq), which was mainly used for kerosene lamps. Arab and Persian chemists also distilled crude oil in order to produce flammable products for military purposes. Through Islamic Spain, distillation became available in Western Europe by the 12th century.
Some sources claim that from the 9th century, oil fields were exploited in the area around modern Baku, Azerbaijan, to produce naphtha for the petroleum industry. These places were described by Marco Polo in the 13th century, who reported the output of those oil wells as hundreds of shiploads. When Marco Polo visited Baku, on the shores of the Caspian Sea, in 1264, he saw oil being collected from seeps. He wrote that "on the confines toward Geirgine there is a fountain from which oil springs in great abundance, in as much as a hundred shiploads might be taken from it at one time."
In 1846, in the Baku settlement of Bibi-Heybat, the first ever well was drilled with percussion tools, to a depth of 21 metres (69 ft), for oil exploration. In 1846–1848, the first modern oil wells were drilled on the Absheron Peninsula north-east of Baku by Russian engineer Vasily Semyonov, drawing on the ideas of Nikolay Voskoboynikov.
Ignacy Łukasiewicz, a Polish pharmacist and petroleum industry pioneer, built one of the world's first modern oil wells in 1854 in the Polish village of Bóbrka, Krosno County, and in 1856 built one of the world's first oil refineries.
In North America, the first commercial oil well entered operation in Oil Springs, Ontario in 1858, while the first offshore oil well was drilled in 1896 at the Summerland Oil Field on the California coast.
The earliest oil wells in modern times were drilled percussively, by repeatedly raising and dropping a cable tool into the earth. In the 20th century, cable tools were largely replaced with rotary drilling, which could drill boreholes to much greater depths and in less time. The record-depth Kola Borehole used a mud motor while drilling to achieve a depth of over 12,000 metres (12 km; 39,000 ft; 7.5 mi).
Until the 1970s, most oil wells were vertical, although lithological and mechanical imperfections cause most wells to deviate at least slightly from true vertical (see deviation survey). However, modern directional drilling technologies allow for strongly deviated wells which can, given sufficient depth and the proper tools, actually become horizontal. This is of great value, as the reservoir rocks which contain hydrocarbons are usually horizontal or nearly horizontal; a horizontal wellbore placed in a production zone has more surface area in the production zone than a vertical well, resulting in a higher production rate. The use of deviated and horizontal drilling has also made it possible to reach reservoirs several kilometers or miles away from the drilling location (extended-reach drilling), allowing for the production of hydrocarbons located below locations that are difficult to place a drilling rig on, environmentally sensitive, or populated.
Life of a well
Planning
Before a well is drilled, a geologic target is identified by a geologist or geophysicist to meet the objectives of the well. For a production well, the target is picked to optimize production from the well and manage reservoir drainage. For an exploration or appraisal well, the target is chosen to confirm the existence of a viable hydrocarbon reservoir or to learn its extent. For an injection well, the target is selected to locate the point of injection in a permeable zone, which may support disposing of water or gas and/or pushing hydrocarbons into nearby production wells.
The target (the end point of the well) will be matched with a surface location (the starting point of the well), and a trajectory between the two will be designed. There are many considerations to take into account when designing the trajectory, such as the clearance to any nearby wells (anti-collision), whether this well will get in the way of future wells, the need to avoid faults where possible, and the fact that certain formations may be easier or more difficult to drill at certain inclinations or azimuths. When the well path is identified, a team of geoscientists and engineers will develop a set of presumed properties of the subsurface that will be drilled through to reach the target.
These properties include pore pressure, fracture gradient, wellbore stability, porosity, permeability, lithology, faults, and clay content. This set of assumptions is used by a well engineering team to perform the casing design and completion design for the well, followed by detailed planning, where, for example, the drill bits are selected, a BHA (bottomhole assembly) is designed, the drilling fluid is selected, and step-by-step procedures are written to provide instruction for executing the well in a safe and cost-efficient manner. Because many elements of a well design interact, a change to one has knock-on effects on many others, so trajectories and designs often go through several iterations before a plan is finalised.
Drilling
The well is created by drilling a hole 12 cm to 1 meter (5 in to 40 in) in diameter into the earth with a drilling rig that rotates a drill string with a bit attached. After the hole is drilled, sections of steel pipe (casing), slightly smaller in diameter than the borehole, are placed in the hole. Cement may be placed between the outside of the casing and the borehole, in the space known as the annulus. The casing provides structural integrity to the newly drilled wellbore, in addition to isolating potentially dangerous high-pressure zones from each other and from the surface. With these zones safely isolated and the formation protected by the casing, the well can be drilled deeper (into potentially more-unstable and violent formations) with a smaller bit, and also cased with a smaller size of casing. Modern wells often have two to five sets of subsequently smaller hole sizes drilled inside one another, each cemented with casing.
To drill the well, the drill bit, aided by the weight of the drill string above it, cuts into the rock. There are different types of drill bit; some cause the rock to disintegrate by compressive failure, while others shear slices off the rock as the bit turns.
Drilling fluid, a.k.a. "mud", is pumped down the inside of the drill pipe and exits at the drill bit. The principal components of drilling fluid are usually water and clay, but it also typically contains a complex mixture of fluids, solids and chemicals that must be carefully tailored to provide the correct physical and chemical characteristics required to safely drill the well. Particular functions of the drilling mud include cooling the bit, lifting rock cuttings to the surface, preventing destabilisation of the rock in the wellbore walls, and overcoming the pressure of fluids inside the rock so that these fluids do not enter the wellbore. Some oil wells are drilled with air or foam as the drilling fluid.
The generated rock "cuttings" are swept up by the drilling fluid as it circulates back to the surface outside the drill pipe. The fluid then goes through "shakers" which strain the cuttings from the good fluid, which is returned to the pit. Watching for abnormalities in the returning cuttings and monitoring pit volume or the rate of returning fluid are imperative to catch "kicks" early. A "kick" occurs when the formation pressure at the depth of the bit is more than the hydrostatic head of the mud above; if not controlled, temporarily by closing the blowout preventers and ultimately by increasing the density of the drilling fluid, a kick would allow formation fluids and mud to come up through the annulus uncontrollably.
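The hydrostatic balance behind kick detection can be expressed numerically. In customary oilfield units, the pressure of a mud column is approximately 0.052 × mud weight (pounds per gallon) × true vertical depth (feet), a standard rule of thumb. The following minimal sketch uses assumed, illustrative values for the mud weight, depth, and pore-pressure gradient; none of these figures come from this article.

```python
# Minimal sketch of the hydrostatic-balance check behind kick detection.
# The 0.052 psi/ft per lb/gal factor is standard oilfield practice; the
# mud weight, depth, and pore-pressure gradient are illustrative assumptions.

MUD_WEIGHT_PPG = 10.0         # drilling fluid density, pounds per gallon (assumed)
DEPTH_FT = 8_000              # true vertical depth of the bit, feet (assumed)
PORE_GRADIENT_PSI_FT = 0.465  # formation pore-pressure gradient (assumed)

hydrostatic_psi = 0.052 * MUD_WEIGHT_PPG * DEPTH_FT
formation_psi = PORE_GRADIENT_PSI_FT * DEPTH_FT

print(f"Mud hydrostatic head: {hydrostatic_psi:,.0f} psi")
print(f"Formation pressure:   {formation_psi:,.0f} psi")
if formation_psi > hydrostatic_psi:
    print("Underbalanced: kick risk; close BOPs and raise mud density")
else:
    print("Overbalanced: formation fluids held back by the mud column")
```

With these example values the mud column exerts about 4,160 psi against roughly 3,720 psi of formation pressure, so the well is overbalanced; raising formation pressure or lowering mud density reverses the comparison and produces a kick.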
The pipe or drill string to which the bit is attached is gradually lengthened as the well gets deeper by screwing in additional 9 m (30 ft) sections or "joints" of pipe under the kelly or topdrive at the surface. This process is called making a connection. "Tripping" is the process of pulling the bit out of the hole to replace it (tripping out) and running back in with a new bit (tripping in). Joints can be combined for more efficient tripping when pulling out of the hole by creating stands of multiple joints. A conventional triple, for example, would pull pipe out of the hole three joints at a time and stack them in the derrick. Many modern rigs, called "super singles", trip pipe one joint at a time, laying it out on racks as they go.
This process is all facilitated by a drilling rig, which contains all necessary equipment to circulate the drilling fluid, hoist and turn the pipe, control downhole pressures, remove cuttings from the drilling fluid, and generate on-site power for these operations.
Completion
After drilling and casing the well, it must be 'completed'. Completion is the process in which the well is enabled to produce oil or gas.
In a cased-hole completion, small holes called perforations are made in the portion of the casing which passed through the production zone, to provide a path for the oil to flow from the surrounding rock into the production tubing. In an open-hole completion, 'sand screens' or a 'gravel pack' is often installed in the last drilled, uncased reservoir section. These maintain structural integrity of the wellbore in the absence of casing, while still allowing flow from the reservoir into the wellbore. Screens also control the migration of formation sands into production tubulars and surface equipment, which can cause washouts and other problems, particularly from unconsolidated sand formations of offshore fields.
After a flow path is made, acids and fracturing fluids may be pumped into the well to fracture, clean, or otherwise prepare and stimulate the reservoir rock to optimally produce hydrocarbons into the wellbore. Finally, the area above the reservoir section of the well is packed off inside the casing and connected to the surface via a smaller-diameter pipe called tubing. This arrangement provides a redundant barrier to leaks of hydrocarbons as well as allowing damaged sections to be replaced. Also, the smaller cross-sectional area of the tubing produces reservoir fluids at an increased velocity, minimizing liquid fallback that would create additional back pressure, and shields the casing from corrosive well fluids.
In many wells, the natural pressure of the subsurface reservoir is high enough for the oil or gas to flow to the surface. However, this is not always the case, especially in depleted fields where the pressures have been lowered by other producing wells, or in low-permeability oil reservoirs. Installing smaller-diameter tubing may be enough to help the production, but artificial lift methods may also be needed. Common solutions include downhole pumps, gas lift, or surface pump jacks.
Many new completion systems have been introduced in the last ten years. Multiple packer systems with frac ports or port collars in an all-in-one system have cut completion costs and improved production, especially in the case of horizontal wells. These new systems allow casings to run into the lateral zone with proper packer/frac port placement for optimal hydrocarbon recovery.
Production
The production stage is the most important stage of a well's life, when the oil and gas are produced. By this time, the oil rigs and workover rigs used to drill and complete the well have moved off the wellbore, and the top is usually outfitted with a collection of valves called a Christmas tree or production tree. These valves regulate pressures, control flows, and allow access to the wellbore in case further completion work is needed. From the outlet valve of the production tree, the flow can be connected to a distribution network of pipelines and tanks to supply the product to refineries, natural gas compressor stations, or oil export terminals.
As long as the pressure in the reservoir remains high enough, the production tree is all that is required to produce the well. If the pressure depletes and it is considered economically viable, an artificial lift method mentioned in the completion section above can be employed.
Workovers are often necessary in older wells, which may need smaller-diameter tubing, scale or paraffin removal, acid matrix jobs, or the completion of new zones of interest in a shallower reservoir. Such remedial work can be performed using workover rigs, also known as pulling units, completion rigs or "service rigs", to pull and replace tubing, or by the use of well intervention techniques utilizing coiled tubing. Depending on the type of lift system and wellhead, a rod rig or flushby can be used to change a pump without pulling the tubing.
Enhanced recovery methods such as water flooding, steam flooding, or CO2 flooding may be used to increase reservoir pressure and provide a "sweep" effect to push hydrocarbons out of the reservoir. Such methods require the use of injection wells (often chosen from old production wells in a carefully determined pattern), and are used when facing problems with reservoir pressure depletion or high oil viscosity, though they can even be employed early in a field's life. In certain cases, depending on the reservoir's geomechanics, reservoir engineers may determine that ultimate recoverable oil may be increased by applying a waterflooding strategy early in the field's development rather than later. Such enhanced recovery techniques are often called "tertiary recovery".
Abandonment
Types of wells
By produced fluid
Wells are classified by the fluid they produce:
- Wells that produce oil
- Wells that produce oil and natural gas, or
- Wells that only produce natural gas.
Natural gas, in a raw form known as associated petroleum gas, is almost always a by-product of producing oil. The small, light gas carbon chains come out of solution as they undergo pressure reduction from the reservoir to the surface, similar to uncapping a bottle of soda where the carbon dioxide effervesces. If it escapes into the atmosphere intentionally it is known as vented gas; if unintentionally, as fugitive gas.
Unwanted natural gas can be a disposal problem at wells that are developed to produce oil. If there are no pipelines for natural gas near the wellhead, it may be of no value to the oil well owner since it cannot reach consumer markets. Such unwanted gas may then be burned off at the well site in a practice known as production flaring, but due to the waste of an energy resource and concerns about environmental damage this practice is becoming less common.
Often, unwanted or 'stranded' gas (gas without a market) is pumped back into the reservoir with an 'injection' well for storage or for re-pressurizing the producing formation. Another solution is to convert the natural gas to a liquid fuel.
Gas to liquid (GTL) is a developing technology that converts stranded natural gas into synthetic gasoline, diesel or jet fuel through the Fischer–Tropsch process developed in World War II Germany. Like oil, such dense liquid fuels can be transported to users using conventional tankers or trucking. Proponents claim GTL fuels burn cleaner than comparable petroleum fuels. Most major international oil companies are in advanced development stages of GTL production, e.g. the 140,000 bbl/d (22,000 m3/d) Pearl GTL plant in Qatar, scheduled to come online in 2011. In locations such as the United States with a high natural gas demand, pipelines are usually favored to take the gas from the well site to the end consumer.
By location
Wells can be located:
- On land, or
- Offshore.
Offshore wells can further be subdivided into:
- Wells with subsea wellheads, where the top of the well sits on the ocean floor under water, often connected to a pipeline on the ocean floor.
- Wells with 'dry' wellheads, where the top of the well is above the water on a platform or jacket, which also often carries processing equipment for the produced fluid.
While the location of the well will be a large factor in the type of equipment used to drill it, there is actually little difference in the well itself: an offshore well simply targets a reservoir that happens to be underneath an ocean. Due to logistics, however, drilling an offshore well is far more costly than an onshore well. By far the most common type is the onshore well. These wells dot the Southern and Central Great Plains and the Southwestern United States, and are the most common wells in the Middle East.
By purpose
Another way to classify oil wells is by their purpose in contributing to the development of a resource:
- Wildcat wells are drilled where little or no known geological information is available. The site may have been selected because of wells drilled some distance from the proposed location but on a terrain that appeared similar to the proposed site. Individuals who drill wildcat wells are known as 'wildcatters'.
- Exploration wells are drilled purely for exploratory (information-gathering) purposes in a new area; the site selection is usually based on seismic data, satellite surveys, etc. Details gathered from this well include the presence of hydrocarbons in the drilled location, the amount of fluid present, and the depth at which oil and/or gas occurs.
- Appraisal wells are used to assess characteristics (such as flow rate and reserve quantity) of a proven hydrocarbon accumulation. Their purpose is to reduce uncertainty about the characteristics and properties of the hydrocarbons present in the field.
- Production wells are drilled primarily for producing oil or gas, once the producing structure and characteristics are determined.
- Development wells are drilled for the production of oil or gas already proven by appraisal drilling to be suitable for exploitation.
- Abandoned wells are wells permanently plugged in the drilling phase for technical reasons.
At a producing well site, active wells may be further categorised as:
- Oil producers, producing predominantly liquid hydrocarbons, though most also produce some associated gas.
- Gas producers, producing almost entirely gaseous hydrocarbons, consisting mostly of natural gas.
- Water injectors, injecting water into the formation to maintain reservoir pressure, or simply to dispose of water produced with the hydrocarbons, because even after treatment it would be too oily and too saline to be considered clean for dumping overboard offshore, let alone into a fresh water resource in the case of onshore wells. Water injection into the producing zone frequently has an element of reservoir management; however, produced water disposal is often into shallower zones safely beneath any fresh water zones.
- Aquifer producers, intentionally producing water for re-injection to manage pressure. If possible, this water will come from the reservoir itself; aquifer-produced water is used rather than water from other sources to preclude chemical incompatibility that might lead to reservoir-plugging precipitates. These wells will generally be needed only if produced water from the oil or gas producers is insufficient for reservoir management purposes.
- Gas injectors, injecting gas into the reservoir, often as a means of disposal or of sequestering gas for later production, but also to maintain reservoir pressure.
Lahee classification
- New Field Wildcat (NFW) – far from other producing fields and on a structure that has not previously produced.
- New Pool Wildcat (NPW) – new pools on an already producing structure.
- Deeper Pool Test (DPT) – on an already producing structure and pool, but on a deeper pay zone.
- Shallower Pool Test (SPT) – on an already producing structure and pool, but on a shallower pay zone.
- Outpost (OUT) – usually two or more locations from the nearest productive area.
- Development Well (DEV) – can be on the extension of a pay zone, or between existing wells (infill).
Cost
The cost of a well depends mainly on the daily rate of the drilling rig, the extra services required to drill the well, the duration of the well program (including downtime and weather time), and the remoteness of the location (logistic supply costs).
The daily rates of offshore drilling rigs vary by their capability and market availability. Rig rates reported by industry web services show that rates for deepwater floating drilling rigs are over twice those of the shallow-water fleet, and rates for the jackup fleet can vary by a factor of 3 depending upon capability. With deepwater drilling rig rates in 2015 of around $520,000/day, and similar additional spread costs, a deepwater well of 100 days' duration can cost around US$100 million.
With high-performance jackup rig rates in 2015 of around $177,000/day, and similar service costs, a high-pressure, high-temperature well of 100 days' duration can cost about US$30 million. Onshore wells can be considerably cheaper, particularly if the field is at a shallow depth, where costs range from less than $4.9 million to $8.3 million, with the average completion costing $2.9 million to $5.6 million per well. Completion makes up a larger portion of onshore well costs than of offshore wells, which have the added cost burden of an oil platform.
The well costs mentioned above do not include the costs associated with the risk of explosion and leakage of oil. Those costs include the cost of protecting against such disasters, the cost of the cleanup effort, and the hard-to-calculate cost of damage to the company's image.
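The cost figures above follow a simple pattern: (rig day rate + spread costs) × programme duration. A minimal sketch reproducing the two offshore cases, under the stated assumption that spread costs are similar in size to the rig day rate:

```python
# Back-of-the-envelope well cost: (day rate + spread costs) x duration.
# Figures are those quoted above; "spread costs similar to the day rate"
# is the stated assumption for both offshore cases.

def well_cost(day_rate_usd: float, spread_rate_usd: float, days: int) -> float:
    """Total programme cost, ignoring risk, cleanup, and contingency."""
    return (day_rate_usd + spread_rate_usd) * days

deepwater = well_cost(day_rate_usd=520_000, spread_rate_usd=520_000, days=100)
jackup_hpht = well_cost(day_rate_usd=177_000, spread_rate_usd=177_000, days=100)

print(f"Deepwater, 100 days:   ${deepwater / 1e6:.0f} million")    # ~$104 million
print(f"HPHT jackup, 100 days: ${jackup_hpht / 1e6:.0f} million")  # ~$35 million
```

The outputs land close to the "around US$100 million" and "about US$30 million" figures quoted above, showing that rig and spread day rates dominate offshore well economics.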
compressed natural gas
Compressed natural gas (CNG) is a fuel gas mainly composed of methane (CH4), compressed to less than 1% of the volume it occupies at standard atmospheric pressure. It is stored and distributed in hard containers at a pressure of 20–25 megapascals (2,900–3,600 psi), usually in cylindrical or spherical shapes.
CNG is used in traditional petrol/internal combustion engine vehicles that have been modified, or in vehicles specifically manufactured for CNG use: either alone (dedicated), with a segregated liquid fuel system to extend range (dual fuel), or in conjunction with another fuel (bi-fuel). It can be used in place of petrol, diesel fuel, and liquefied petroleum gas (LPG). CNG combustion produces fewer undesirable gases than the aforementioned fuels. In comparison to other fuels, natural gas poses less of a threat in the event of a spill, because it is lighter than air and disperses quickly when released. Biomethane, biogas from anaerobic digestion or landfill, can also be used.
In response to high fuel prices and environmental concerns, CNG has been used in auto rickshaws, pickup trucks, transit and school buses, and trains. The cost and placement of fuel storage containers is the major barrier to wider and quicker adoption of CNG as a fuel. It is also why municipal governments and public transportation vehicles were the most visible early adopters, as they can more quickly amortize the money invested in the new (and usually cheaper) fuel. In spite of these circumstances, the number of vehicles in the world using CNG has grown steadily (30 percent per year). Now, as a result of the industry's steady growth, the cost of such fuel storage cylinders has been brought down to a much more acceptable level; in particular, many countries are able to make reliable and cost-effective Type 1 and Type 2 CNG cylinders for conversion needs.
Energy density
CNG's energy density by mass is the same as that of liquefied natural gas (LNG), at 53.6 MJ/kg. Its volumetric energy density, 9 MJ/L, is about 42% of that of LNG (22 MJ/L), because the gas is not liquefied, and about 25% of that of diesel fuel.
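A short worked check of these figures; the diesel value of roughly 36 MJ/L is implied by the 25% comparison rather than stated directly, so it is an inference:

```python
# Worked comparison of the energy-density figures quoted above.
# Diesel's ~36 MJ/L is implied by the "25% of diesel" figure rather
# than stated directly in this article.

CNG_MJ_PER_KG = 53.6    # same as LNG by mass
CNG_MJ_PER_L = 9.0
LNG_MJ_PER_L = 22.0
DIESEL_MJ_PER_L = 36.0  # inferred

print(f"CNG vs LNG (by volume):    {CNG_MJ_PER_L / LNG_MJ_PER_L:.0%}")    # ~41%
print(f"CNG vs diesel (by volume): {CNG_MJ_PER_L / DIESEL_MJ_PER_L:.0%}") # 25%
```

The gap between identical mass-based density and much lower volume-based density is the central packaging problem of CNG: the energy is there, but it occupies far more tank space than a liquid fuel.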
History
Gases provided the original fuel for internal combustion engines. The first experiments with compressed gases took place in France in the 1850s. Natural gas first became a transport fuel during World War I. In the 1960s, Columbia Natural Gas of Ohio tested a CNG carrier. The ship was to carry compressed natural gas in vertical pressure bottles; however, this design failed because of the high cost of the pressure vessels. Since then, there have been attempts at developing a commercially viable CNG carrier. Several competing CNG ocean transport designs have evolved. Each design proposes a unique approach to optimizing gas transport, while using as much off-the-shelf technology as possible, to keep costs competitive.
Uses
Motor vehicles
Worldwide, there were 14.8 million natural gas vehicles (NGVs) by 2011, with the largest numbers in Iran (4.07 million), Pakistan (2.85 million), Argentina (2.07 million), Brazil (1.7 million) and India (1.1 million), with the Asia-Pacific region leading with 5.7 million NGVs, followed by Latin America with almost four million vehicles.
Several car and vehicle manufacturers, such as Fiat, Opel/General Motors, Peugeot, Volkswagen, Toyota, Honda, Maruti Suzuki, Hyundai, Tata Motors, and others, sell bi-fuel cars. In 2006, the Fiat Siena Tetrafuel was introduced in the Brazilian market, equipped with a 1.4-litre FIRE engine that runs on petrol, E25 (the standard Brazilian petrol blend), E100 ethanol, and CNG.
Any existing petrol vehicle can be converted to a dual-fuel petrol/CNG vehicle. Authorized shops can do the retrofitting, which involves installing a CNG cylinder, plumbing, a CNG injection system, and electronics. The cost of installing a CNG conversion kit can often reach $8,000 on passenger cars and light trucks, so conversion is usually reserved for vehicles that travel many miles each year. CNG costs about 50% less than petrol, and produces up to 90% fewer emissions than petrol.
Locomotives
CNG locomotives are operated by several railroads. The Napa Valley Wine Train in the US successfully retrofitted a diesel locomotive to run on compressed natural gas before 2002. This converted locomotive was upgraded to utilize a computer-controlled fuel injection system in May 2008, and is now the Napa Valley Wine Train's primary locomotive. Ferrocarril Central Andino in Peru has run a CNG locomotive on a freight line since 2005. CNG locomotives are usually diesel–electric locomotives that have been converted to use compressed natural gas generators instead of diesel generators to generate the electricity that drives the traction motors. Some CNG locomotives are able to selectively fire their cylinders only when there is a demand for power, which, theoretically, gives them higher fuel efficiency than conventional diesel engines. CNG is also cheaper than petrol or diesel fuel.
Natural gas transport
CNG is used to transport natural gas by sea for intermediate distances, using CNG carrier ships, especially when the infrastructure for pipelines or LNG is not in place. At short distances, undersea pipelines are often more cost-effective, and for longer distances, LNG is often more cost-effective.
Advantages
- Natural gas vehicles have lower maintenance costs than other hydrocarbon-fuel-powered vehicles.
- CNG fuel systems are sealed, preventing fuel losses from spills or evaporation.
- Increased life of lubricating oils, as CNG does not contaminate and dilute the crankcase oil.
- Being a gaseous fuel, CNG mixes easily and evenly in air.
- CNG is less likely to ignite on hot surfaces, since it has a high auto-ignition temperature (540 °C) and a narrow range (5–15 percent) of flammability.
- CNG-powered vehicles are considered to be safer than petrol-powered vehicles.
- Less pollution and more efficiency: CNG emits significantly less pollution directly than petrol or oil when combusted (e.g., unburned hydrocarbons (UHC), carbon monoxide (CO), nitrogen oxides (NOx), sulfur oxides (SOx) and particulate matter (PM)). For example, an engine running on petrol for 100 km produces 22 kilograms of CO2, while covering the same distance on CNG emits only 16.3 kilograms of CO2. The lifecycle greenhouse gas emissions for CNG compressed from California's pipeline natural gas are given a value of 67.70 grams of CO2-equivalent per megajoule (gCO2e/MJ) by CARB (the California Air Resources Board), approximately 28 percent lower than the average petrol fuel in that market (95.86 gCO2e/MJ). CNG produced from landfill biogas was found by CARB to have the lowest greenhouse gas emissions of any fuel analyzed, with a value of 11.26 gCO2e/MJ (more than 88 percent lower than conventional petrol) in the low-carbon fuel standard that went into effect on January 12, 2010. Due to lower carbon dioxide emissions, switching to CNG can help mitigate greenhouse gas emissions. However, natural gas leaks (both in direct use and in the production and delivery of the fuel) represent an increase in greenhouse gas emissions. The ability of CNG to reduce greenhouse gas emissions over the entire fuel lifecycle will depend on the source of the natural gas and the fuel it is replacing.
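The conversion economics described under Motor vehicles above (a kit costing up to $8,000 and fuel roughly 50% cheaper than petrol) imply a simple payback calculation, which is why conversion suits high-mileage vehicles. A minimal sketch; the annual mileage, fuel economy, and petrol price are illustrative assumptions, not figures from this article:

```python
# Simple payback estimate for a CNG conversion kit.
# Kit cost and the ~50% fuel saving are from the text above; the mileage,
# fuel economy, and petrol price are illustrative assumptions.

KIT_COST_USD = 8_000
FUEL_SAVING = 0.50           # CNG costs about 50% less than petrol
PETROL_PRICE_USD_L = 1.00    # assumed
FUEL_USE_L_PER_100KM = 10.0  # assumed
ANNUAL_KM = 30_000           # assumed high-mileage vehicle

annual_fuel_cost = ANNUAL_KM / 100 * FUEL_USE_L_PER_100KM * PETROL_PRICE_USD_L
annual_saving = annual_fuel_cost * FUEL_SAVING
payback_years = KIT_COST_USD / annual_saving

print(f"Annual petrol cost:   ${annual_fuel_cost:,.0f}")
print(f"Annual saving on CNG: ${annual_saving:,.0f}")
print(f"Payback period:       {payback_years:.1f} years")
```

Under these assumptions the kit takes over five years to pay back; halving the annual mileage doubles that, which is why low-mileage private cars rarely convert.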
Drawbacks
Compressed natural gas vehicles require a greater amount of space for fuel storage than conventional petrol-powered vehicles. Since it is a compressed gas, rather than a liquid like petrol, CNG takes up more space for each petrol gallon equivalent (GGE). In converted vehicles, the cylinders used to store the CNG take up space in the trunk of a car or the bed of a pickup truck that has been modified to additionally run on CNG. This problem is solved in factory-built CNG vehicles that install the cylinders under the body of the vehicle, leaving the trunk free (e.g., the Fiat Multipla, New Fiat Panda, Volkswagen Touran Ecofuel, Volkswagen Caddy Ecofuel, and the Chevy Taxi sold in countries such as Peru). Another option is installation on the roof (typical on buses), but this can require structural modifications.
In 2014, a test by the Danish Technological Institute of Euro6 heavy vehicles on CNG and diesel showed that CNG had higher fuel consumption and the same noise and production of CO2 and particulates, but lower NOx emissions.
Leakage of unburned methane as natural gas is a significant issue, because methane, the primary component of natural gas, is a powerful, short-lived greenhouse gas. It is more than 100 times more potent at trapping energy than carbon dioxide (CO2), the principal contributor to man-made climate change.
Comparison with other natural gas fuels
Compressed natural gas is often confused with LNG (liquefied natural gas). Both are stored forms of natural gas. The main difference is that CNG is stored at ambient temperature and high pressure, while LNG is stored at low temperature and nearly ambient pressure. In their respective storage conditions, LNG is a liquid and CNG is a supercritical fluid. CNG has a lower cost of production and storage compared to LNG, as it does not require an expensive cooling process and cryogenic tanks. However, CNG requires a much larger volume than petrol to store the same energy, and the use of very high pressures (3000 to 4000 psi, or 205 to 275 bar). As a consequence, LNG is often used for transporting natural gas over large distances, in ships, trains or pipelines, with the gas converted into CNG before distribution to the end user.
Natural gas is being experimentally stored at lower pressure in a form known as an ANG (adsorbed natural gas) cylinder, where it is adsorbed at 35 bar (500 psi, the pressure of gas in natural gas pipelines) in various sponge-like materials, such as carbon and MOFs (metal–organic frameworks). The fuel is stored at a similar or greater energy density than CNG. This means that vehicles can be refueled from the natural gas network without extra gas compression, and the fuel cylinders can be slimmed down and made of lighter, weaker materials. It is possible to mix the ANG and CNG technologies to achieve increased natural gas storage capacity: in a process known as high-pressure ANG, a high-pressure CNG tank is filled with adsorbents such as activated carbon (an adsorbent with a high surface area) and stores natural gas by both CNG and ANG mechanisms.
Compressed natural gas is sometimes mixed with hydrogen (HCNG), which increases the H/C ratio (hydrogen/carbon ratio) of the fuel and gives it a flame speed up to eight times higher than CNG.
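The storage pressures quoted in the introduction allow a rough ideal-gas check of how much compression shrinks the gas. A minimal sketch; ideal-gas behaviour is an approximation, and real methane at these pressures deviates noticeably from it:

```python
# Ideal-gas estimate of the volume reduction from compressing methane
# to the 20-25 MPa storage pressures quoted in the introduction.
# Real-gas compressibility makes the true figure somewhat different.

ATMOSPHERIC_MPA = 0.101325

for storage_mpa in (20.0, 25.0):
    # Boyle's law at constant temperature: V2/V1 = P1/P2
    fraction = ATMOSPHERIC_MPA / storage_mpa
    print(f"At {storage_mpa:.0f} MPa: {fraction:.2%} of atmospheric volume")
# Output: ~0.51% and ~0.41%, consistent with "less than 1%" above.
```

The same arithmetic explains why ANG storage at 35 bar needs an adsorbent to compete: without the sponge-like material, a 35 bar tank would hold only a few percent of what a 200+ bar CNG cylinder holds.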
Codes and standards
The lack of harmonized codes and standards across international jurisdictions is an additional barrier to NGV market penetration. The International Organization for Standardization has an active technical committee working on a standard for natural gas fuelling stations for vehicles.
Despite the lack of harmonized international codes, natural gas vehicles have an excellent global safety record. Existing international standards include ISO 14469-2:2007, which applies to CNG vehicle nozzles and receptacles, and ISO 15500-9:2012, which specifies tests and requirements for the pressure regulator. The National Fire Protection Association's NFPA 52 code covers natural gas vehicle safety standards in the United States.
Worldwide adoption
Iran, Pakistan, Argentina, Brazil and China have the highest numbers of CNG-run vehicles in the world.
Natural gas vehicles are increasingly used in Iran, Pakistan, the Asia-Pacific region, the Indian capital of Delhi, and other large cities such as Ahmedabad, Mumbai, Pune, and Kolkata, as well as cities such as Lucknow, Kanpur, Varanasi, and others. Their use is also increasing in South America, Europe, and North America because of rising petrol prices.
Africa
Egypt is amongst the top 10 countries in CNG adoption, with 128,754 CNG vehicles and 124 CNG fueling stations. Egypt was also the first nation in Africa and the Middle East to open a public CNG fueling station, in January 1996.
The vast majority (780,000) have been produced as dual-fuel vehicles by the auto manufacturer in the last two years, and the remainder have been converted using aftermarket conversion kits in workshops. There are 750 active refueling stations countrywide, with an additional 660 refueling stations under construction and expected to come on stream. Currently, the major problem facing the industry as a whole is that the building of refueling stations lags behind dual-fuel vehicle production, forcing many to use petrol instead.
Nigeria
CNG use in Nigeria started with a pilot project by NIPCO Gas Limited in Benin City, Edo State, in 2010. NIPCO Gas Limited is a 100% subsidiary of NIPCO PLC. As of June 2020, seven CNG stations had been built in Benin City, with about 7,500 cars there running on CNG. In Benin City, major companies such as Coca-Cola, 7up and Yongxing Steel use CNG to power their forklifts and trucks, while Edo City Transport Ltd (ECTS) also runs some of its buses on CNG. CNG stations in Kwale, Nigeria were inaugurated in 2019 by Mr. Abhishek Sharma, head of marketing (natural gas) at NIPCO Gas Limited.
Asia
China
In China, companies such as Sino-Energy are active in expanding the footprint of CNG filling stations in medium-size cities across the interior of the country, where at least two natural gas pipelines are operational.
Malaysia
In Malaysia, the use of CNG was originally introduced for taxicabs and airport limousines during the late 1990s, when new taxis were launched with CNG engines while taxicab operators were encouraged to send in existing taxis for full engine conversions. The practice of using CNG remained largely confined to taxicabs, predominantly in the Klang Valley and Penang, due to a lack of interest. No incentives were offered to those other than taxicab owners to use CNG engines, while government subsidies on petrol and diesel made conventional road vehicles cheaper to use in the eyes of consumers. Petronas, Malaysia's state-owned oil company, also monopolises the provision of CNG to road users.
As of July 2008, Petronas operated only about 150 CNG refueling stations, most of them concentrated in the Klang Valley, with another 50 expected by the end of 2008.
As fuel subsidies were gradually removed in Malaysia starting June 5, 2008, the subsequent 41 percent price hike on petrol and diesel fuel led to a 500 percent increase in the number of new CNG cylinders installed. National car maker Proton considered fitting its Waja, Saga and Persona models with CNG kits from Prins Autogassystemen by the end of 2008, while a local distributor of locally assembled Hyundai cars offered new models with CNG kits. Conversion centres, which also benefited from the rush for lower running costs, perform partial conversions of existing road vehicles, allowing them to run on both petrol or diesel and CNG, at a cost of between RM3,500 and RM5,000 for passenger cars.
Myanmar
The Ministry of Transport of Myanmar passed a law in 2005 which required all public transport vehicles (buses, trucks and taxis) to be converted to run on CNG. The Government permitted several private companies to handle the conversion of existing diesel and petrol cars, and also to begin importing CNG variants of buses and taxis. Accidents and rumours of accidents, partly fuelled by Myanmar's position in local hydrocarbon politics, have discouraged citizens from using CNG vehicles, although now almost every taxi and public bus in Yangon, Myanmar's largest city, runs on CNG. CNG stations have been set up around Yangon and other cities, but electricity shortages mean that vehicles may have to queue up for hours to fill their gas containers. The Burmese opposition movements oppose the conversion to CNG, accusing the companies involved of being proxies of the junta and charging that the petrodollars earned by the regime go towards the defense sector rather than towards improving the infrastructure or welfare of the people.
India
In India, there are now over 4,500 CNG stations across the country, compared with only about 900 in 2014. The government aims to increase the use of CNG-powered vehicles by setting up more CNG stations, with the goal of raising the current number to 8,000 stations within the next two years.
As of December 2022, the state of Gujarat has the highest number of CNG pumps in the country, followed by Uttar Pradesh, with Maharashtra a little behind those regions.
Pakistan
In Pakistan, the Karachi government, under an order of the Supreme Court in 2004, made it mandatory for all city buses and auto rickshaws to run on CNG, with the intention of reducing air pollution. In 2012, the federal government announced plans to gradually phase out CNG over a period of approximately three years, given natural gas shortages which have been negatively affecting the manufacturing sector. Aside from limiting electricity generation capacity, gas shortages in Pakistan have also raised the costs of business for key industries, including the fertilizer, cement and textile sectors.
Singapore
In Singapore, CNG was used until 2018 by public transport vehicles such as buses and taxis, as well as goods vehicles. During its heyday from 2008 onwards, more owners of private cars sought to convert their petrol-driven vehicles to also run on CNG, due to rising petrol prices. The initial cost of converting a regular vehicle to dual fuel at the German conversion workshop of C.
Melchers, for example, was around S$3,800, with the promise of real cost savings that dual-fuel vehicles bring over the long term. Singapore at the time had five operating filling stations for natural gas. Sembcorp Gas Pte Ltd ran the station on Jurong Island and, jointly with Singapore Petroleum Company, the filling station at Jalan Buroh; both of these stations are in the western part of the country. Another station on the mainland, in Mandai Link to the north, was operated by SMART Energy. SMART also owned a second station, on Serangoon North Ave 5, set up at the end of March 2009. The fifth station, located in Toh Tuck, was opened by the UNION Group in September 2009 and is recognized by Guinness World Records as the largest in the world, with 46 refuelling hoses. The UNION Group, which operated 1,000 CNG Toyota Wish taxis, then planned to introduce another three daughter stations and increase the CNG taxi fleet to 8,000 units.
As a key incentive for using this eco-friendly fuel, Singapore offered a green vehicle rebate (GVR) to users of CNG technology. First introduced in January 2001, the GVR granted a 40 percent discount on the open market value (OMV) cost of newly registered green passenger vehicles. The initiative ended at the end of 2012, as the government believed the 'critical mass' of CNG vehicles would by then have been built up. Reliability issues, the lower range that CNG provided (as cited in user feedback), refueling stations concentrated mostly at the western end of Singapore, and the rising demand for greener solutions such as hybrid technologies led to CNG's demise in the country, with both public buses and the last CNG taxis on their way to being scrapped in 2018.
Europe
In Italy, there are more than 1,173 CNG stations. The use of methane for vehicles started in the 1930s and has continued off and on until today. Since 2008 there has been a large market expansion for natural gas vehicles (CNG and LPG), caused by the rise of petrol prices and by the need to reduce air pollution emissions.
Before 1995, the only way to have a CNG-powered car was to have it retrofitted with an aftermarket kit. Large producers included Landi Renzo, Tartarini Auto, Prins Autogassystemen, OMVL and BiGAs, among others, with AeB supplying the electronic parts used by most kit producers. Landi Renzo and Tartarini also sell in Asia and South America. After 1995, bi-fuel (petrol/CNG) cars became available from several major manufacturers. Currently Fiat, Opel, Volkswagen, Citroën, Renault, Volvo and Mercedes sell various car models and small trucks that are petrol/CNG powered. Usually, the CNG parts used by major car manufacturers are actually produced by automotive aftermarket kit manufacturers; e.g., Fiat uses Tartarini Auto components, while Volkswagen uses Teleflex GFI and Landi Renzo components.
In Belgium, CNG is a very new fuel. At the beginning of 2014 there were only 17 refuelling stations, all of them in Flanders, but the number is now increasing rapidly: at the beginning of 2015 there were 29 refueling stations, and as of January 2017 there are 76 active refueling stations, most of them in Flanders, with only 7 in Wallonia or Brussels. Compared to petrol, CNG has an advantageous fiscal treatment as a fuel, with lower excise duties (although VAT is always paid). Since CNG as a car fuel is not totally exempt from excise duties, CNG cars do not pay a supplementary road tax to partially compensate the State for the loss of revenue.
Instead, LPG cars pay a supplementary road tax in Belgium, because LPG is totally exempt from excise duties. Since CNG is not totally exempt from excise duties, in Belgium it is permitted to connect a car to the home natural gas network and to refuel the car from home. The purchase of CNG cars is not subsidised by the government, but by the Belgian producers and distributors of natural gas. Fiat and Volkswagen sell factory-equipped CNG cars in Belgium. At the end of 2018 there were 11,188 vehicles running on CNG in Belgium.
In Germany, CNG-powered vehicles were expected to increase to two million units by 2020. The cost of CNG fuel is between one-third and one-half that of other fossil fuels in Europe. In 2016 there were around 900 CNG stations in Germany, and major German car manufacturers such as Volkswagen, Mercedes, Opel and Audi offer CNG engines on most of their models. Augsburg is one of the few cities to have run only CNG-operated public buses, doing so since 2011.
In Turkey, the Ankara municipality is increasingly using CNG buses, whose numbers had reached 1,090 by 2011. Istanbul started in 2014 with an order of 110 buses, and Konya also added 60 buses to its fleet the same year.
In Portugal, there are 9 CNG refueling stations as of September 25, 2017.
In Hungary, there are four public CNG refueling stations, in the cities of Budapest, Szeged, Pécs and Győr. The public transportation companies of Szeged and Szolnok, and some districts in Budapest, run buses mainly on CNG.
In Bulgaria, there are 96 CNG refueling stations as of July 2011, and one can be found in most of Bulgaria's big towns. In the capital Sofia there are 22 CNG stations, making it possibly the city with the most publicly available CNG stations in Europe. There are also quite a few in Plovdiv, Ruse, Stara Zagora and Veliko Tarnovo, as well as in the towns on the Black Sea: Varna, Burgas, Nesebar and Kavarna. CNG vehicles are becoming more and more popular in the country. The fuel is mostly used by taxi drivers because of its much lower price compared to petrol. As of July 2015, the city of Sofia is rapidly renewing its public transport fleet with MAN Lion's City buses running on CNG, and many companies are switching to CNG cargo vans and even heavy trucks for their daily operations within city limits.
In North Macedonia, there is one CNG station, located in the capital Skopje, but it is not for public use. Only twenty buses of the local Public Transport Company have been fitted to use a mixture of diesel and CNG. The first commercial CNG station in Skopje is in an advanced stage of development and was expected to start operation in July 2011.
In Serbia, there are about 20 public CNG refuelling stations as of August 2019: four in the capital Belgrade, and the rest in towns including Subotica (1), Novi Sad (1), Zrenjanin (1), Pancevo (2), Kruševac (1), Kragujevac (1) and Cacak (2). A detailed list is available on the CNGEurope website.
In Slovenia, there are four public CNG refuelling stations as of December 2018: two in the capital Ljubljana, and one each in Maribor and Jesenice. Additionally, at least 14 new refuelling stations are planned in all city municipalities by the end of 2020. Ljubljana Passenger Transport operates 66 CNG-fuelled city buses, as of May 2016; its Maribor counterpart, Marprom, has 19 CNG city buses in its fleet, as of October 2018.
In Croatia, there are two public CNG refuelling stations, situated close to the centre of Zagreb and in Rijeka.
At least 60 CNG buses are in use in public transport in Zagreb (Zagreb public transport services).
In Estonia, there are 11 public CNG refuelling stations: four in the country's capital Tallinn, and one each in Tartu, Pärnu, Viljandi, Rakvere, Jõhvi, and Narva. Since 2011, Tartu has had five Scania-manufactured CNG buses operating its inner-city routes.
In Sweden, there are currently 90 CNG filling stations available to the public (compared with about 10 LPG filling stations), primarily located in the southern and western parts of the country as well as the Mälardalen region. Another 70–80 CNG filling stations are under construction or in a late stage of planning (completions 2009–2010). Several of the planned filling stations are located in the northern parts of the country, which will greatly improve the infrastructure for CNG car users. There are approximately 14,500 CNG vehicles in Sweden (2007), of which approximately 13,500 are passenger cars; the remainder includes buses and trucks. In Stockholm, the public transportation company SL currently operates 50 CNG buses but has the capacity to operate 500. The Swedish government recently prolonged its subsidies for the development of CNG filling stations, from 2009-12-31 to 2010-12-31.
In Spain, CNG is a very new fuel and the refueling network is being developed. In Madrid, the EMT uses 1,915 buses running on CNG. At the beginning of 2015 there were 35 CNG refueling stations in Spain. Several car brands sell brand-new cars running on CNG, including Fiat, Volkswagen, Seat and Skoda, among others.
As of 2013, there are 47 public CNG filling stations in the Czech Republic, mainly in the big cities. Local bus manufacturers SOR Libchavy and Tedom produce CNG versions of their vehicles, with roof-mounted cylinders.
Middle East
Iran
Iran has one of the largest fleets of CNG vehicles and one of the largest CNG distribution networks in the world. There are 2,335 CNG fueling stations, with a total of 13,534 CNG nozzles. The number of CNG-burning vehicles in Iran exceeds 3.5 million. CNG consumption by Iran's transportation sector is around 20 million cubic meters per day.
North America
Canada
Natural gas has been used as a motor fuel in Canada for over 20 years. With assistance from federal and provincial research programs, demonstration projects and NGV market deployment programs during the 1980s and 1990s, the population of light-duty NGVs grew to over 35,000 by the early 1990s. This assistance resulted in a significant adoption of natural gas transit buses as well.
The NGV market started to decline after 1995, eventually reaching today's vehicle population of about 12,000. This figure includes 150 urban transit buses, 45 school buses, 9,450 light-duty cars and trucks, and 2,400 forklifts and ice resurfacers. The total fuel use in all NGV markets in Canada was 1.9 petajoules (PJ) in 2007 (or 54.6 million litres of petrol equivalent), down from 2.6 PJ in 1997.
Public CNG refueling stations have declined in number from 134 in 1997 to 72 today. There are 22 in British Columbia, 12 in Alberta, 10 in Saskatchewan, 27 in Ontario and two in Québec. There are only 12 private fleet stations.
Canadian industry has developed CNG-fueled truck and bus engines, CNG-fueled transit buses, and light trucks and taxis. Fuelmaker Corporation of Toronto, the Honda-owned manufacturer of CNG auto refueling units, was forced into bankruptcy by parent Honda USA for an unspecified reason in 2009.
The various assets of Fuelmaker were subsequently acquired by Fuel Systems Corporation of Santa Ana, California.
United States
Similar to Canada, the United States has implemented various NGV initiatives and programs since 1980, but has had limited success in sustaining the market. There were 105,000 NGVs in operation in 2000; this figure peaked at 121,000 in 2004, and decreased to 110,000 in 2009.
In the United States, federal tax credits are available for buying a new CNG vehicle. Use of CNG varies from state to state; only 34 states have at least one CNG fueling site.
In Texas, Railroad Commissioner David Porter launched his Texas Natural Gas Initiative in October 2013 to encourage the adoption of natural gas fuel in the transportation and exploration and production sectors. As of 2015, Texas is rapidly becoming a leader in natural gas infrastructure in the US, with 137 natural gas fueling stations (private and public). Nine months into FY2015, Commissioner Porter reported that Texas CNG and LNG sales showed a 78 percent increase over FY2014 year to date. Per Commissioner Porter in June 2015: "Natural gas vehicles are becoming mainstream faster than expected. These collections are nearly double the amount collected last year at this time. At 15 cents per gallon equivalent, $3,033,600 of motor fuel tax equates to the sale of 20,224,000 gallon equivalents of natural gas." The $3 million in Texas natural gas tax receipts covers both CNG and LNG for FY2015 through May 2015; the Texas fiscal year starts September 1, so nine months of tax collections are represented.
In Athens, Alabama, the city and its Gas Department installed a public CNG station on the Interstate 65 corridor, making it the only public CNG station between Birmingham and Nashville as of February 2014. The city's larger fleet vehicles, such as garbage trucks, also use this public station for fueling. The city also has two slow-fill, non-public CNG stations for its fleet. Athens has added CNG/petrol Tahoes for police and fire, a CNG Honda Civic, CNG Heil garbage trucks, and CNG/petrol Dodge pickup trucks to its fleet.
In California, CNG is used extensively in local city and county fleets, as well as public transportation (city and school buses). There are 90 public fueling stations in southern California alone, and travel from San Diego to the Bay Area, Las Vegas and Utah is routine with the advent of online station maps such as www.cngprices.com. Compressed natural gas is typically available for 30–60 percent less than the cost of petrol in much of California.
The 28 buses running the Gwinnett County Transit local routes run on 100 percent CNG. Additionally, about half of the Georgia Regional Transportation Authority express fleet, which runs and refuels out of the Gwinnett County Transit facility, uses CNG.
The Massachusetts Bay Transportation Authority was running 360 CNG buses as early as 2007, and is the largest user in the state.
The Metropolitan Transportation Authority (MTA) of New York City currently has over 900 buses powered by compressed natural gas, with CNG bus depots located in Brooklyn, the Bronx, and Queens. The Nassau Inter-County Express (or NICE Bus, formerly New York MTA Long Island Bus) runs a 100% Orion CNG-fueled bus fleet for fixed-route service, consisting of 360 buses for service in Nassau County, parts of Queens, New York, and the western sections of Suffolk County.
The City of Harrisburg, Arkansas has switched some of the city's vehicles to compressed natural gas in an effort to save money on fuel costs.
Trucks used by the city's street and water, sewer, and gas departments have been converted from petrol to CNG.
Personal use of CNG is currently a small niche market, though with current tax incentives and a growing number of public fueling stations available, it is experiencing unprecedented growth. The state of Utah offers a subsidised statewide network of CNG filling stations at a rate of $1.57/gge, while petrol is above $4.00/gal. Elsewhere in the nation, retail prices average around $2.50/gge, with home refueling units compressing gas from residential gas lines for under $1/gge. Other than aftermarket conversions and government used-vehicle auctions, the only currently produced CNG vehicle in the United States is the Honda Civic GX sedan, which is made in limited numbers and available only in states with retail fueling outlets.
An initiative known as the Pickens Plan, which calls for the expansion of the use of CNG as a standard fuel for heavy vehicles, was recently started by oilman and entrepreneur T. Boone Pickens. California voters defeated Proposition 10 in the 2008 General Election by a significant margin (59.8 percent to 40.2 percent). Proposition 10 was a $5 billion bond measure that, among other things, would have given rebates to state residents who purchased CNG vehicles.
On February 21, 2013, T. Boone Pickens and New York Mayor Michael Bloomberg unveiled a CNG-powered mobile pizzeria. The company, Neapolitan Express, uses alternative energy to run the truck as well as 100 percent recycled and compostable materials for its carryout boxes.
Congress has encouraged conversion of cars to CNG with tax credits of up to 50 percent of the auto conversion cost and the CNG home filling station cost. However, while CNG is a much cleaner fuel, the conversion requires a type certificate from the EPA. Meeting the requirements of a type certificate can cost up to $50,000. Other, non-EPA-approved kits are available; a complete and safe aftermarket conversion using a non-EPA-approved kit can be achieved for as little as $400 without the cylinder.
Deployments
AT&T ordered 1,200 CNG-powered cargo vans from General Motors in 2012, the largest order of CNG vehicles from General Motors to date. AT&T, which has announced its intention to invest up to $565 million to deploy approximately 15,000 alternative fuel vehicles over a 10-year period through 2018, will use the vans to provide and maintain communications, high-speed Internet and television services for AT&T customers.
South America
CNG vehicles are commonly used in South America, mainly as taxicabs in the main cities of Argentina and Brazil. Normally, standard petrol vehicles are retrofitted in specialized shops, which involves installing the gas cylinder in the trunk along with the CNG injection system and electronics. Argentina and Brazil are the two countries with the largest fleets of CNG vehicles, with a combined total fleet of more than 3.4 million vehicles by 2009. Conversion has been facilitated by a substantial price differential with liquid fuels, locally produced conversion equipment and a growing CNG-delivery infrastructure. As of 2009 Argentina had 1,807,186 NGVs, or 15 percent of all vehicles, with 1,851 refueling stations across the nation; and Brazil had 1,632,101 vehicles and 1,704 refueling stations, with a higher concentration in the cities of Rio de Janeiro and São Paulo. Colombia had an NGV fleet of 300,000 vehicles, and 460 refueling stations, as of 2009.
Bolivia has increased its fleet from 10,000 in 2003 to 121,908 units in 2009, with 128 refueling stations. Peru had 81,024 NGVs and 94 fueling stations as of 2009, but that number is expected to rise sharply, as Peru sits on South America's largest gas reserves. In Peru several factory-built NGVs have the cylinders installed under the body of the vehicle, leaving the trunk free. Among the models built with this feature are the Fiat Multipla, the new Fiat Panda, the Volkswagen Touran Ecofuel, the Volkswagen Caddy Ecofuel and the Chevy Taxi. Other countries with significant NGV fleets are Venezuela (15,000) and Chile (8,064) as of 2009.
Oceania
During the 1970s and 1980s, CNG was commonly used in New Zealand in the wake of the oil crises, but fell into decline after petrol prices receded. At the peak of natural gas use, 10 percent of New Zealand's cars were converted, around 110,000 vehicles.
For a period of time, Brisbane Transport in Queensland, Australia adopted a policy of purchasing only CNG buses. Brisbane Transport has 215 Scania L94UB and 324 MAN 18.310 models as well as 30 MAN NG 313 articulated CNG buses. The State Transit Authority purchased 100 Scania L113CRB, 283 Mercedes-Benz O405NH and 254 Euro 5-compliant Mercedes-Benz OC500LE buses.
In the 1990s, Benders Busways of Geelong, Victoria, trialled CNG buses for the Energy Research and Development Corporation. Martin Ferguson, Ollie Clark and Noel Childs, featured on The 7:30 Report, raised the issue of CNG as an overlooked transport fuel option in Australia, highlighting the large volumes of LNG currently being exported from the North West Shelf in light of the cost of importing crude oil to Australia.
References
california air resources board
The California Air Resources Board (CARB or ARB) is an agency of the government of California that aims to reduce air pollution. Established in 1967, when then-governor Ronald Reagan signed the Mulford-Carrell Act combining the Bureau of Air Sanitation and the Motor Vehicle Pollution Control Board, CARB is a department within the cabinet-level California Environmental Protection Agency.
The stated goals of CARB include attaining and maintaining healthy air quality; protecting the public from exposure to toxic air contaminants; and providing innovative approaches for complying with air pollution rules and regulations. CARB has also been instrumental in driving innovation throughout the global automotive industry through programs such as its ZEV mandate.
One of CARB's responsibilities is to define vehicle emissions standards. California is the only state permitted to issue emissions standards under the federal Clean Air Act, subject to a waiver from the United States Environmental Protection Agency. Other states may choose to follow CARB or the federal vehicle emission standards but may not set their own.
Governance
CARB's governing board is made up of 16 members, with 2 non-voting members appointed for legislative oversight, one each by the California State Assembly and Senate. Twelve of the 14 voting members are appointed by the governor and subject to confirmation by the Senate: five from local air districts, four air pollution subject-matter experts, two members of the public, and the Chair. The other two voting members are appointed from environmental justice committees by the Assembly and Senate.
Five of the governor-appointed board members are chosen from regional air pollution control or air quality management districts, including one each from:
Bay Area AQMD (San Francisco Bay Area), currently John Gioia
San Diego County APCD, currently Nathan Fletcher
San Joaquin Valley APCD, currently Alexander Sherriffs, M.D.
South Coast AQMD, currently Judy Mitchell
A Sacramento-area district (Sacramento Metropolitan AQMD, Yolo-Solano AQMD, Placer County APCD, Feather River AQMD, or El Dorado County AQMD), currently Phil Serna
Four governor-appointed board members are subject-matter experts in specific fields: automotive engineering, currently Dan Sperling; science, agriculture, or law, currently John Eisenhut; medicine, currently John R. Balmes, M.D.; and air pollution control. The governor is also responsible for two appointees from members of the public, and the final governor appointee is the Board's Chair. The first Chair of CARB was Dr. Arie Jan Haagen-Smit, who was previously a professor at the California Institute of Technology and started research into air pollution in 1948. Dr. Haagen-Smit is credited with discovering the source of smog in California, which led to the development of air pollution controls and standards.
The two legislature-appointed board members work directly with communities affected by air pollution. They are currently Diane Takvorian and Dean Florez, appointed by the Assembly and Senate respectively.
Organizational structure
CARB is a part of the California Environmental Protection Agency, an organization which reports directly to the Governor's Office in the Executive Branch of California State Government. CARB has the following divisions and offices:
Office of the Chair
Executive Office
Office of Community Air Protection
Air Quality Planning and Science Division
Emission Certification and Compliance Division
Enforcement Division
Industrial Strategies Division
Mobile Source Control Division
Mobile Source Laboratory Division
Research Division
Sustainable Transportation and Communities Division
Transportation and Toxics Division
Office of Information Services
Administrative Services Division
Air Quality Planning and Science Division
The division assesses the extent of California's air quality problems and the progress being made to abate them, coordinates statewide development of clean air plans and maintains databases pertinent to air quality and emissions. The division's technical support work provides a basis for clean air plans and CARB's regulatory programs. This support includes management and interpretation of emission inventories, air quality data, meteorological data and air quality modeling. The Air Quality Planning and Science Division has the following branches:
Special Assessment Branch
Emission Inventory and Economic Analysis Branch
Modeling & Meteorology Branch
Air Quality Planning Branch
Mobile Source Analysis Branch
Consumer Products and Air Quality Assessment Branch
Atmospheric Modeling & Support Section
The Atmospheric Modeling & Support Section is one of three sections within the Modeling & Meteorology Branch. The other two sections are the Regional Air Quality Modeling Section and the Meteorology Section. The air quality and atmospheric pollution dispersion models routinely used by this section include a number of the models recommended by the U.S. Environmental Protection Agency (EPA). The section uses models which were either developed by CARB or whose development was funded by CARB, such as:
CALPUFF – Originally developed by the Sigma Research Company (SRC) under contract to CARB. Currently maintained by the TRC Solution Company under contract to the U.S. EPA.
CALGRID – Developed and currently maintained by CARB.
SARMAP – Developed and currently maintained by CARB.
Role in reducing greenhouse gases
The California Air Resources Board is charged with implementing California's comprehensive suite of policies to reduce emissions of greenhouse gases. In part due to the efforts of CARB, California has successfully decoupled greenhouse gas emissions from economic growth, and achieved its goal of reducing emissions to 1990 levels four years earlier than the target date of 2020.
Alternative Fuel Vehicle Incentive Program
The Alternative Fuel Vehicle Incentive Program (also known as Fueling Alternatives) is funded by the California Air Resources Board (CARB), offered throughout the State of California and administered by the California Center for Sustainable Energy (CCSE).
Low-Emission Vehicle Program
CARB first adopted the Low-Emission Vehicle (LEV) Program standards in 1990 to address smog-forming pollutants; these covered automobiles sold in California from 1994 through 2003. An amendment to the LEV Program, known as LEV II, was adopted in 1999, and covered vehicles for the 2004 through 2014 model years.
Greenhouse gas (GHG) emission regulations were adopted in 2004, taking effect with the 2009 model year, and are named the "Pavley" standards after Assemblymember Fran Pavley, who had written Assembly Bill 1493 in 2002 to establish them. A second amendment, LEV III, was adopted in 2012, and covers vehicles sold from 2015 onward for both smog (superseding LEV II) and GHG (superseding Pavley) emissions.
The rules created under the LEV Program have been codified as specific sections in Title 13 of the California Code of Regulations; in general, LEV I is § 1960.1; LEV II is § 1961; Pavley is § 1961.1; LEV III is § 1961.2 (smog-forming pollutants) and § 1961.3 (GHG). The ZEV regulations, which were initially part of LEV I, have been broken out separately into § 1962.
For comparison, the average new car sold in 1965 would produce approximately 2,000 lb (910 kg) of hydrocarbons over 100,000 mi (160,000 km) of driving; under the LEV I standards, the average new car sold in 1998 was projected to produce hydrocarbon emissions of 50 lb (23 kg) over the same distance, and under LEV II, the average new car in 2010 would further reduce hydrocarbon emissions to 10 lb (4.5 kg).
Required labeling
In 2005, the California State Assembly passed AB 1229, which required all new vehicles manufactured after January 1, 2009, to bear an Environmental Performance Label, which scored the emissions performance of the vehicle on two scales ranging between 1 (worst) and 10 (best): one for global warming (emissions of GHG such as N2O, CH4, air conditioning refrigerants, and CO2) and one for smog-forming compounds (non-methane organic gases (NMOG), NOx, and HC). The Federal Government followed suit and required a similar "smog score" on new vehicles sold starting in 2013; the standards were realigned for labels applied to 2018 model year vehicles.
Vehicle categories
The LEV program has established several categories of reduced-emissions vehicles. LEV I defined LEV and ULEV vehicles, and added TLEV and Tier 1 temporary classifications that would not be sold after 2003. LEV II added SULEV and PZEV vehicles, and LEV III tightened emission standards. The actual emission levels depend on the standards in use.
LEV (Low Emission Vehicle): The least stringent emission standard for all new cars sold in California beyond 2004.
ULEV (Ultra Low Emission Vehicle): 50% cleaner than the average new 2003 model year vehicle.
SULEV (Super Ultra Low Emission Vehicle): These vehicles emit substantially lower levels of hydrocarbons, carbon monoxide, oxides of nitrogen and particulate matter than conventional vehicles. They are 90% cleaner than the average new 2003 model year vehicle.
LEV I defined emission limits for several different classes of vehicle, including passenger cars (PC), light-duty trucks (LDT), and medium-duty vehicles (MDV). Heavy-duty vehicles were specifically excluded from LEV I. LEV I also defined a loaded vehicle weight (LVW) as the vehicle's curb weight plus an allowance of 300 lb (140 kg). In general, the most stringent standards were applied to passenger cars and light-duty trucks with a LVW up to 3,750 lb (1,700 kg) (these "light" LDTs were later denoted LDT1 under LEV II). LEV II increased the scope of vehicles classed as light-duty trucks to encompass a higher GVWR, up to 8,500 lb (3,900 kg) compared to the LEV I standard of 6,000 lb (2,700 kg).
In addition, LEV I had defined less stringent limits for heavier LDTs (denoted LDT2, with a LVW of 3,751–5,750 lb (1,701–2,608 kg)); LEV II closed that discrepancy and defined a single emissions standard for all PCs and LDTs. Under LEV III, medium-duty passenger vehicles (MDPV) were brought under the most stringent standards alongside PCs and LDTs.
Smog-forming compound emissions limits
Rather than providing a single standard for vehicles based on age, purpose, and weight, the LEV I standards introduced different tiers of limits for smog-forming compound emissions starting in the 1995 model year. After 2003, LEV was the minimum standard to be met.
Greenhouse gas emissions limits
CARB adopted regulations for limits on greenhouse gas emissions in 2004, starting with the 2009 model year, to support the direction provided by AB 1493. In June 2005, Governor Arnold Schwarzenegger signed Executive Order S-03-05, which required a reduction in California GHG emissions, targeting an 80% reduction compared to 1990 levels by 2050. Assembly Bill 32, better known as the California Global Warming Solutions Act of 2006, codified these requirements.
CARB filed a waiver request with the United States Environmental Protection Agency (EPA) under Section 209(b) of the Clean Air Act in December 2005 to permit it to establish limits on greenhouse gas emissions; although the waiver request was initially denied in March 2008, it was later approved on June 30, 2009 after President Barack Obama signed a Presidential Memorandum directing the EPA to reconsider the waiver. In the initial denial, EPA Administrator Stephen L. Johnson stated the Clean Air Act was not "intended to allow California to promulgate state standards for emissions from new motor vehicles designed to address global climate change problems" and further, that he did not believe "the effects of climate change in California are compelling and extraordinary compared to the effects in the rest of the country." Johnson's successor, Lisa P. Jackson, signed the waiver overturning Johnson's denial, writing that "EPA must grant California a waiver if California determines that its standards are, in the aggregate, at least as protective of the public health and welfare as applicable Federal standards." Jackson also noted that in the history of the waiver process, over 50 waivers had been granted and only one had been fully denied, namely the March 2008 denial of the GHG emissions regulation.
CARB decided to adopt regulation of GHG emissions under Executive Order G-05-061, which provided phase-in targets for fleet average GHG emissions in CO2-equivalent grams per mile starting with the 2009 model year. The calculation of CO2-equivalent emissions was based on contributions from four different chemicals: CO2, N2O, CH4, and air conditioning refrigerants. The emissions in g/mi CO2-equivalent are calculated according to the formula $CO_2^{\mathrm{equivalent}} = CO_2 + 296 \times N_2O + 23 \times CH_4 - AC^{\mathrm{direct}} - AC^{\mathrm{indirect}}$, which has two terms for direct and indirect emissions allowances of air conditioning refrigerants, depending on the refrigerant used, such as HFC-134a, and the system design.
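To illustrate the arithmetic, the following is a minimal Python sketch of the formula above for a conventionally fuelled vehicle. The function name and all input values are hypothetical, and the alternative-fuel and ZEV variants described next would modify it as the text explains.

```python
# Illustrative sketch of the CARB CO2-equivalent formula (g/mi) for a
# conventionally fuelled vehicle, using the GWP factors of 296 (N2O)
# and 23 (CH4) from the formula above. All input values below are
# hypothetical, not CARB certification data.

def co2_equivalent_g_per_mi(co2, n2o, ch4, ac_direct, ac_indirect):
    """CO2-equivalent emissions in g/mi for a conventional vehicle."""
    return co2 + 296 * n2o + 23 * ch4 - ac_direct - ac_indirect

# Example: 300 g/mi of direct CO2, the 0.006 g/mi default N2O value
# mentioned in the text, and assumed CH4 and air-conditioning
# refrigerant allowance values.
print(co2_equivalent_g_per_mi(co2=300.0, n2o=0.006, ch4=0.010,
                              ac_direct=5.0, ac_indirect=7.0))
# -> 290.006, i.e. roughly 290 g/mi CO2-equivalent
```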
Vehicles powered by alternative fuels use a slightly modified formula, $CO_2^{\mathrm{equivalent}} = (CO_2 + AC^{\mathrm{indirect}}) \times F + 296 \times N_2O + 23 \times CH_4 + AC^{\mathrm{direct}}$, where $F$ is a fuel adjustment factor depending on the alternative fuel used (1.03 for natural gas, 0.89 for LPG, and 0.74 for E85). ZEVs are also required to calculate GHG emissions, as the processes that generate the energy (or fuel) they use also produce GHG. For ZEVs, $CO_2^{\mathrm{equivalent}} = U + AC^{\mathrm{direct}}$, where $U$ is the upstream emissions factor (130 g/mi for battery electric vehicles, 210 for hydrogen/fuel cell, and 290 for hydrogen/internal combustion). Direct CO2 emissions could be calculated in a relatively straightforward fashion based on fuel consumption. Manufacturers that do not wish to measure N2O emissions may assume a value of 0.006 g/mi. An update was issued in 2010 which allowed manufacturers to calculate GHG emissions using CAFE data; for conventionally powered vehicles, the contribution from the nitrous oxide and methane terms could be assumed to be 1.9 g/mi.
CARB voted unanimously in March 2017 to require automakers to average 54.5 miles per US gallon (4.32 L/100 km; 65.5 mpg‑imp) for new cars in 2025.
Section 177 states
Because California had emissions regulations prior to the 1977 Clean Air Act, under Section 177 of that act other states may adopt the more stringent California emissions regulations as an alternative to federal standards. Thirteen other states and the District of Columbia have chosen to do so, and ten of those have additionally adopted the California Zero-Emission Vehicle regulations. In December 2020, Minnesota announced its intention to adopt California LEV and ZEV rules; following a hearing before an administrative law judge in February 2021, the Minnesota Pollution Control Agency adopted the California regulations. In August 2022, Virginia, citing a 2021 law, announced it would follow California regulations for ZEV registrations. Arizona and New Mexico had previously adopted California LEV regulations under Section 177, but repealed their clean car standards in 2012 and 2013, respectively.
In Canada, the province of Quebec adopted CARB standards effective in 2010. CARB and the Government of Canada entered into a Memorandum of Understanding in June 2019 to cooperate on greenhouse gas emissions mitigation.
Zero-Emission Vehicle Program
The CARB Zero-Emission Vehicle (ZEV) program was enacted by the California government starting in 1990 to promote the use of zero-emission vehicles. The program goal is to reduce the pervasive air pollution affecting the main metropolitan areas in the state, particularly in Los Angeles, where prolonged pollution episodes are frequent. The California ZEV rule was first adopted by CARB as part of the 1990 Low-Emission Vehicle (LEV I) Program. The focus of the 1990 rules (ZEV-90) was to meet air quality standards for ozone rather than the reduction of greenhouse gas (GHG) emissions.: 5  Under LEV II in 1999, the ZEV regulations were moved to a separate section (13 CCR § 1962) and the requirements for ZEVs as a percentage of fleet sales were made more formal.
Executive Order S-03-05 (2005) and Assembly Bills 1493 (2002) and 32 (2006) prompted CARB to reevaluate the ZEV program as last amended in 1996, which had been primarily concerned with reducing emissions of smog-forming pollutants. By the time AB 32 passed in 2006, vehicles complying with PZEV and AT PZEV standards had become commercially successful, and the ZEV program could then shift towards reducing both smog-forming compounds and greenhouse gases.
The next set of ZEV regulations was adopted in 2012 with LEV III. CARB put both LEV and ZEV rules together as the Advanced Clean Cars Program (ACC), adopted in 2012, which included regulations for cars sold through the 2025 model year. The regulations include updates to LEV III (for smog-forming emissions), LEV III GHG (for greenhouse gas emissions), and ZEV. Since then, in September 2020, Governor Gavin Newsom signed an executive order directing that by 2035 all new cars and passenger trucks sold in California will be zero-emission vehicles. Executive Order N-79-20 directs CARB to develop regulations requiring that ZEVs be an increasing share of new vehicles sold in the state, with light-duty cars and trucks and off-road vehicles and equipment meeting the 100% ZEV goal by 2035, and medium- and heavy-duty trucks and buses meeting the same 100% ZEV goal by 2045. The order also directs Caltrans to develop near-term actions to encourage "an integrated, statewide rail and transit network" and infrastructure to support bicycles and pedestrians. In response, CARB began development of the Advanced Clean Cars II (ACC II) Program, focusing on emissions of vehicles sold after 2025. ACC II is scheduled for consideration before CARB in June 2022.
Vehicle definitions
LEV I defined a ZEV as one that produces "zero emissions of any criteria pollutants under any and all possible operational modes and conditions." A vehicle could still qualify as a ZEV with a fuel-fired heater, as long as the heater was unable to be operated at ambient temperatures above 40 °F (4 °C) and did not have any evaporative emissions.: 2–6, 2–7  Under LEV II (ZEV-99), the ZEV definition was updated to include precursor pollutants, but did not consider upstream emissions from power plants.: C-1
The ZEV regulation has evolved and been modified several times since 1990, and several new partial or low-emission categories were created and defined, including the introduction of the PZEV and AT PZEV categories in ZEV-99.: B-1, B-2
PZEV (Partial Zero Emission Vehicle): Meets SULEV tailpipe standards, has a 15-year / 150,000-mile warranty, and zero evaporative emissions. These vehicles are 80% cleaner than the average 2002 model year car.
AT PZEV (Advanced Technology PZEV): These are advanced-technology vehicles that meet PZEV standards and include ZEV-enabling technology, typically hybrid electric vehicles (HEV). They are 80% cleaner than the average 2002 model year car.
ZEV (Zero Emission Vehicle): Zero tailpipe emissions, and 98% cleaner than the average new 2003 model year vehicle.
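The LEV I definition of a ZEV quoted above reduces to a simple rule: zero criteria-pollutant emissions in every operating mode, with a narrow exception for a temperature-locked fuel-fired heater. A minimal, purely illustrative encoding of that rule follows; the function and parameter names are hypothetical, not CARB's.

```python
# Illustrative encoding of the LEV I ZEV definition described above.
# A fuel-fired heater is permitted only if it cannot operate at
# ambient temperatures above 40 °F and has no evaporative emissions.

def qualifies_as_zev(criteria_emissions_g_mi: float,
                     has_fuel_fired_heater: bool = False,
                     heater_max_ambient_f: float = 40.0,
                     heater_evap_emissions: float = 0.0) -> bool:
    if criteria_emissions_g_mi != 0.0:
        return False  # any criteria-pollutant emissions disqualify
    if has_fuel_fired_heater:
        # Heater must be locked out above 40 °F and have zero
        # evaporative emissions.
        return heater_max_ambient_f <= 40.0 and heater_evap_emissions == 0.0
    return True

print(qualifies_as_zev(0.0))                                   # True
print(qualifies_as_zev(0.0, True, heater_max_ambient_f=50.0))  # False
```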
Manufacturer sales volume
Under ZEV-90, CARB classified manufacturers according to their average sales per year between 1989 and 1993: small-volume manufacturers were those that sold 3,000 or fewer new vehicles per year; intermediate-volume manufacturers sold between 3,001 and 35,000; and large-volume manufacturers sold more than 35,000 per year.: 2–3  For large-volume manufacturers, CARB required that 2% of 1998 to 2000 model year vehicles sold were ZEVs, ramping up to 5% ZEVs by 2001 and 10% ZEVs in 2003 and beyond. Intermediate-volume manufacturers were not required to meet the goals until 2003, and small-volume manufacturers were exempted. These percentages were calculated based on total production of passenger cars and light-duty trucks with a loaded vehicle weight (LVW) less than 3,750 lb (1,700 kg).: 3-22 to 3-24
ZEV credit system
The LEV I rules also introduced the concept of emission credits. Under LEV I, the fleet average emissions rate of non-methane organic gases (NMOG) produced by a manufacturer's vehicles was required to meet increasingly stringent requirements starting in 1994.: 3–18  The calculation of fleet average NMOG emissions was based on a weighted sum of vehicle NMOG emissions, based on the number sold and type of certification (i.e., TLEV, LEV, ULEV, etc.), divided by the total number of vehicles produced, including ZEVs.: 3–20  Manufacturers whose fleet average NMOG emissions met or exceeded the NMOG emissions goal would be subjected to civil penalties; those which fell below the goal would receive credits, which could then be marketed to other manufacturers.: 3–24
The 1996 amendments to the ZEV regulations in LEV I (ZEV-96) introduced credits under which a ZEV could be counted more than once, based on vehicle range or battery specific energy, to encourage deployment of ZEVs prior to 2003.: 3–4
Under LEV II/ZEV-99, the PZEV and AT PZEV categories were introduced, and the percentage of ZEVs sold by a manufacturer could be partially met by the sales of PZEVs and AT PZEVs.: C-2  If a vehicle met PZEV criteria, it qualified for a credit equal to 0.2 of one ZEV for the purposes of calculating that manufacturer's ZEV production.: C-6  AT PZEVs capable of traveling with zero emissions for a limited range were allowed additional credit if the urban all-electric range was at least ten miles.: C-7  ZEVs that were introduced prior to 2003 received a multiplier, with a value ranging up to 10× a single ZEV depending on the all-electric range and fast-charging capability.: C-11, C-12
MOA demonstration fleet
In March 1996, ZEV-96 eliminated the ZEV ramp-up planned to start in 1998, but the goal of 10% ZEVs by 2003 was retained, with credits granted for sales of partial ZEVs (PZEVs). According to comment responses, CARB determined that advanced batteries would not be ready to meet the ZEV requirements until at least 2003.: 6–7  In conjunction with relaxing the requirements in ZEV-96, CARB signed memoranda of agreement (MOAs) with the seven large-volume manufacturers to begin rolling out demonstration fleets of ZEVs with limited public availability in the near term.
The GM EV1 was the first battery electric vehicle (BEV) offered to the public, in partial fulfillment of the agreement with CARB. The EV1 was available only through a US$399/month lease (equivalent to $740 in 2022) starting in December 1996; the initial markets were South Coast, San Diego, and Arizona, later expanding to Sacramento and the Bay Area.
GM also offered an electric S-10 pickup truck to fleet operators. In 1997, Honda (EV Plus, May 1997), Toyota (RAV4 EV, October 1997), and Chrysler (EPIC, 1997) followed suit. Ford also introduced the Ranger EV for the 1998 model year, and Nissan stated it planned to offer the Altra in the 1998 model year as well to fulfill the MOA. As an acceptable alternative, Mazda stated it would purchase ZEV credits from Ford.: 7–10
Advanced Clean Cars
The Low-Emission Vehicle Program was revised to define modified ZEV regulations for 2015 models. CARB estimates that ACC will result in ZEVs making up 10% of all sales by 2025.: 5  The share remained at 3% between 2014 and 2016. Battery vehicles receive 3 or 4 credits, while fuel cell cars receive 9. As of 2016, a credit had a market value of $3,000–$4,000, and some automakers have more credits than required.
CARB held a public workshop in September 2020 where several new consumer-friendly regulations for ZEVs were proposed to improve adoption:
Standardization of a DC fast charge inlet (proposing to use CCS Combo 1, with adapters provided by the vehicle manufacturer if applicable)
Standardization of vehicle and battery data (to assist assessment of vehicle condition and the need for repairs)
Implementation of a standardized battery state-of-health (SOH) indicator (using SAE J1634 dynamometer testing to define battery capacity) and definition of a battery SOH value that qualifies for warranty repair
Making ZEV powertrain service and repair information available to independent technicians and repair shops (including standardization of communication protocols for vehicle data)
In May 2021, additional draft requirements were added:
Durability: BEVs to maintain 80% of certified range for 15 years/150,000 miles
Durability: FCEVs to maintain 90% of fuel cell system output power after 4,000 hours of operation
Battery labelling: standardized content to improve the efficiency of recycling batteries to recover materials or potentially repurpose them
To improve access to ZEVs, CARB added proposed environmental justice (EJ) credits in August 2021 for manufacturers who improve options for clean transportation in underserved communities, such as by providing a discount on a ZEV that would be used in a community-based clean mobility program. The August workshop also included additional regulations for ZEVs:
Range: starting in 2026, minimum (2-cycle) range to be 200 mi (320 km)
On-board charger: minimum 5.76 kW for AC (Level 2) charging, sufficient for a BEV to charge overnight (8 hours) from a 30 A source
The final workshop in October 2021 proposed that ZEVs would be taken out of fleet calculations for vehicle emissions and provided yearly targets for ZEV vehicle sales as a percentage of total sales, including potential EJ credits.
Additionally, the required warranty period and the requirements to take credit for PHEV sales were defined:
Battery to retain ≥ 80% SOH for 8 years/100,000 miles
PHEVs to meet one of two sets of requirements:
Transitional PHEVs (2026–28): minimum 30 mi (48 km) all-electric range, with additional credit if the vehicle exceeds 10 mi (16 km) on the US06 high-speed/acceleration cycle; 8-year/100,000-mile 80% SOH battery warranty; 5.76 kW on-board charger
Full-credit PHEVs (2026+): minimum 50 mi (80 km) all-electric range and minimum 40 mi (64 km) on the US06 high-speed/acceleration cycle; 8-year/100,000-mile 80% SOH battery warranty; 5.76 kW on-board charger
"Small volume" manufacturers (defined as those selling fewer than 4,500 cars per year) are required to comply with the ZEV mandate starting with the 2035 model year
Hybrid and Zero-Emission Truck and Bus Voucher Incentive Project
The California Hybrid and Zero-Emission Truck and Bus Voucher Incentive Project (HVIP for short) offers up-front discounts on medium- and heavy-duty electric trucks, with increased discounts for public transit agencies, school buses for public school districts, and vehicles operating in disadvantaged communities. For example, a public school district could receive up to $198,000 off the price of a new electric bus; a public transit agency could receive $69,000 off the price of a new Class 4 electric shuttle. Launched by the California Air Resources Board in 2009, the project is part of California Climate Investments.
OHV Emission Standards
The California DMV implements the policy dictates of the California Air Resources Board (CARB) with respect to the registration of off-highway motor vehicles (OHVs). Registration consists of ID plates or placards issued by the DMV. Operating a motorized vehicle off-highway in California requires either a Green Sticker or a Red Sticker ID. The Green Sticker indicates that the vehicle has passed emission requirements. The Red Sticker (issued through 2021) restricts OHV use because the vehicle does not meet emission standards established by CARB. The red sticker program began in 1994, when CARB adopted standards for emissions from two-stroke engines used primarily on dirt bikes. Between 1998 and 2003 the red sticker program was refined, allowing vehicles that did not meet peak ozone season standards to be operated only at specific times of the year. As of model year 2022, CARB no longer authorizes the issuing of red stickers.
Low-carbon fuel standard
The Low-Carbon Fuel Standard (LCFS) requires oil refineries and distributors to ensure that the mix of fuel they sell in the Californian market meets the established declining targets for greenhouse gas emissions, measured in CO2-equivalent grams per unit of fuel energy sold for transport purposes. The 2007 Governor's LCFS directive calls for a reduction of at least 10% in the carbon intensity of California's transportation fuels by 2020. These reductions include not only tailpipe emissions but also all other associated emissions from the production, distribution and use of transport fuels within the state. The California LCFS therefore considers the fuel's full life cycle, also known as the "well to wheels" or "seed to wheels" efficiency of transport fuels. The standard aims to reduce the state's dependence on petroleum, create a market for clean transportation technology, and stimulate the production and use of alternative, low-carbon fuels in California. On April 23, 2009, CARB approved the specific rules for the LCFS, which went into effect in January 2011.
The rule proposal prepared by its technical staff was approved by a 9–1 vote, setting the 2020 maximum carbon intensity reference value at 86 grams of carbon dioxide released per megajoule of energy produced.
PHEV Research Center
The PHEV Research Center was launched with funding from the California Air Resources Board.
Innovative Clean Transit
Under the Innovative Clean Transit (formerly known as Advanced Clean Transit) regulation adopted in December 2018, public transportation agencies in California will gradually transition to a zero-emission bus fleet by 2040. Large transit agencies (defined as those operating more than 65 buses in the San Joaquin Valley Air Basin or South Coast Air Quality Management District, or those operating more than 100 buses elsewhere with populations greater than 200,000) are required to make 25% of new bus purchases zero-emission buses (ZEBs) starting in 2023, 50% starting in 2026, and 100% starting in 2029. Small transit agencies are required to make 25% of new purchases ZEBs in 2026 and 100% in 2029 and beyond. Per the regulation, ZEBs are defined to include battery electric buses and fuel cell buses, but do not include electric trolleybuses, which draw power from overhead lines. The Antelope Valley Transit Authority set a goal to be the first all-electric fleet by the end of 2018, ahead of the tightened regulations.
Regulation of ozone produced by air cleaners and ionizers
The California Air Resources Board maintains a list of air cleaners (many with ionizers) meeting its indoor ozone limit of 0.050 parts per million. All portable indoor air cleaning devices sold in California must be certified by CARB; to be certified, air cleaners must be tested for electrical safety and ozone emissions, and meet an ozone emission concentration limit of 0.050 parts per million.
Southern California headquarters, Mary D. Nichols Campus
On October 27, 2017, CARB broke ground on its new Southern California headquarters. CARB chose the site near the University of California, Riverside, in March 2016 and completed environmental studies in June 2017. Construction costs of $419 million, which include $108 million for specialized laboratory and testing equipment, were approved by the Legislature in July. Of those costs, $154 million comes from fines paid by Volkswagen for air quality violations related to the diesel car cheating case. Additional funds will come from the Motor Vehicle Account, the Air Pollution Control Fund and the Vehicle Inspection Repair Fund.
Over a decade of planning went into the development of a replacement for CARB's aging Haagen-Smit Laboratory. Opened in 1973 in El Monte, California, the Haagen-Smit Laboratory is the site of many of CARB's groundbreaking efforts to reduce the emissions of cars and trucks, as well as efforts to introduce zero-emission and plug-in vehicles to California. In 2015, engineers and technicians based at the Haagen-Smit Laboratory were instrumental in discovering the Volkswagen diesel "defeat device", leading to the largest emissions control violation settlement in national and California history. The new campus features an extended range of dedicated test cells, including heavy-duty testing.
There is also workspace for accommodating new test methods for future generations of vehicles, and space for developing enhanced on-board diagnostics and portable emissions measurement systems. The facility also includes a separate advanced chemistry laboratory. The Southern California headquarters' office and administration space accommodates 460 employees and includes visitor reception and public areas, a press room, flexible conference and workshop space, and a 250-person public auditorium.
Sustainability shaped the architecture and the details of the campus. Designed by ZGF Architects and built by Hensel Phelps, the 402,000-square-foot headquarters is designed to be the largest zero-net-energy building in the United States, aided by solar arrays throughout the campus that generate 3.5 megawatts of electricity and a chilled-beam temperature management system that provides increased energy efficiency and occupant comfort. The facility achieves Leadership in Energy and Environmental Design (LEED) Platinum certification and California Green Building Standards Code (CALGreen) Tier 2 standards.
On November 18, 2021, CARB dedicated the new Southern California headquarters in honor of former Chair Mary D. Nichols, whose career at CARB spanned four decades under three different California governors.
See also
List of California Air Districts
2008 California Statewide Truck and Bus Rule
Carl Moyer Memorial Air Quality Standards Attainment Program
References
External links
Official website
Title 13 Motor Vehicles, Division 3 regulations in the California Code of Regulations (CCR) from Westlaw
Title 17 Public Health, Division 3 regulations in the CCR from Westlaw
CARB's Low-Emission Vehicle Regulations and Test Procedures
CARB web site page on Climate Change
CARB's Diesel Emission Control Strategies Verification
News
"California charts course to fight global warming": cutting California's greenhouse gas emissions by 30 percent over the next 12 years.
"California air board announces plan for carbon-credit trading."
willow project
The Willow project is an oil drilling project by ConocoPhillips located on the plain of the North Slope of Alaska in the National Petroleum Reserve in Alaska. The project originally called for constructing and operating up to five drill pads for a total of 250 oil wells. Associated infrastructure includes access and infield roads, airstrips, pipelines and a gravel mine on the permafrost, as well as a temporary island in waters managed by the state of Alaska to facilitate module delivery via sealift barges. Oil was discovered in the Willow prospect area west of Alpine, Alaska, in 2016, and in October 2020, the Bureau of Land Management (BLM) approved ConocoPhillips' Willow development project in its Record of Decision. After a court challenge in 2021, the BLM issued its final supplemental environmental impact statement (SEIS) in February 2023. Alaskan lawmakers from both parties, as well as the Arctic Slope Regional Corporation, have supported the Willow project. On March 13, 2023, the Biden administration approved the project.
Environmentalist organization Earthjustice filed a lawsuit on March 14, 2023, on behalf of conservation groups to stop the Willow project, saying that the approval of a new carbon pollution source contradicts President Joe Biden's promises to slash greenhouse gas emissions in half by 2030 and transition the United States to clean energy. The project could produce up to 750 million barrels of oil and 287 million tons of carbon emissions plus other greenhouse gases over 30 years, and could adversely impact Arctic wildlife and Native American communities. The Willow project would damage the complex local tundra ecosystem and, according to an older government estimate, release the same amount of greenhouse gases annually as half a million homes.
Geography
The Willow project is located on the plain of the North Slope of Alaska, within the National Petroleum Reserve in Alaska, in a part called the Bear Tooth Unit, west of Alpine, Alaska, on native lands. It is located on Arctic coastal tundra, less than 30 miles (48 km) from the Arctic Ocean and entirely on the Arctic coastal plain, as depicted in Figure 3.9.2 of the final supplemental environmental impact statement (SEIS). This land consists of permafrost tundra, 94% of which is wetlands and 5% freshwater.: 16
Expected oil extraction
Over its anticipated 30-year life, the Willow project could produce 200,000 barrels of oil per day, and up to 600 million barrels of oil in total. According to estimates by the Bureau of Land Management (BLM), Willow could generate between $8 billion and $17 billion in revenue. The BLM's environmental impact statement found it would result in 287 million tons of carbon emissions plus other greenhouse gases. In June 2021, officials at ConocoPhillips stated the company had "identified up to 3 billion barrels of oil equivalent of nearby prospects and leads with similar characteristics that could leverage the Willow infrastructure...[Willow] unlocks the West".
History
In 1999, ConocoPhillips acquired the first Willow-area leases in the northeast portion of the National Petroleum Reserve in Alaska, called the Bear Tooth Unit. In 2016, the final year of the Obama administration, ConocoPhillips drilled two oil exploration wells, which encountered "significant pay". It named this discovery Willow.
In 2018, the second year of the Trump administration, it appraised the greater Willow area and discovered three additional oil prospects. In May 2018, ConocoPhillips officially requested permission from the BLM to develop the Willow prospect: to construct and operate five drill pads with 50 oil wells each, for a total of 250 oil wells, along with access and infield roads, airstrips, pipelines, a gravel mine and a temporary island to facilitate module delivery via sealift barges. In August 2019, after a 44-day public scoping period and having consulted with 13 tribal entities and Alaska Native Claims Settlement Act corporations, the BLM published a draft master development plan.
In August 2020, during the final year of the Trump administration, the BLM approved the development of the ConocoPhillips project option, which foresees the construction of a new road. Although a roadless option would have aided caribou movements in the area, the BLM, in its Willow master development project Record of Decision published in October 2020, sided against the roadless option because it felt the accompanying increase in air traffic would increase the overall disturbance.: 7  ConocoPhillips plans to use thermosiphons to keep the melting permafrost frozen, so that the ground remains solid enough for the oil development infrastructure. Construction at that time was expected to take about nine years and to employ up to 1,650 seasonal workers, an average of 373 annual workers, and about 406 full-time employees once operational.
In August 2021, the U.S. District Court for the District of Alaska vacated the BLM permit for the Willow project, because it "1) improperly excluded analysis of foreign greenhouse gas emissions, 2) improperly screened out alternatives from detailed analysis based on BLM's misunderstanding of leaseholders' rights (i.e., that leases purportedly afforded the right to extract 'all possible' oil and gas from each lease tract), and 3) failed to give due consideration to the requirement in the NPRPA to afford 'maximum protection' to significant surface values in the Teshekpuk Lake Special Area".: 3  According to documents received under the Freedom of Information Act, ConocoPhillips was then involved in analyzing the court's decision and participated in developing the next supplemental review.
In July 2022, the BLM released a draft SEIS in response to the District Court order. In August 2022, the Alaska Native corporation of the village of Nuiqsut submitted comments on the draft SEIS favoring a reduction in the number of drill pads from five to four, shorter gravel roads and protection of Teshekpuk Lake.: 48–66
On November 9, 2023, U.S. District Court Judge Sharon Gleason upheld the Biden administration's approval of the Willow project and rejected claims brought against it by an Iñupiat group and environmentalists. Earthjustice, one of the organizations bringing the lawsuit, has announced its intention to appeal the decision.
Government approval, 2023
On February 1, 2023, the BLM completed the final SEIS, approving the project with three drill pads with 50 oil wells each, for a total of 150 oil wells. Alaskan lawmakers from both parties, including the congressional delegation (Senators Lisa Murkowski (R), Dan Sullivan (R) and Representative Mary Peltola (D)), as well as the Arctic Slope Regional Corporation, have supported the Willow project.
As of March 2023, the Department of the Interior permitted ConocoPhillips to build a new ice road from the existing Kuparuk road system at a Kuparuk River Oil Field drill site and to use a partially grounded ice bridge across the Colville River near Ocean Point "to transport sealift modules" to the Willow project drilling area.: 3
As a final decision drew near, media attention and public interest increased dramatically; a petition urging President Biden to "say no to the Willow Project" was signed by more than 2.4 million people after widespread attention on TikTok. On March 13, 2023, the Biden administration approved the project. Secretary of the Interior Deb Haaland's name did not appear on the approval; deputy secretary Tommy Beaudreau, who acted as the point person on the project for the department, signed the final document. In response, environmental groups announced their plans to sue.
On March 14, 2023, environmentalist organization Earthjustice filed a lawsuit on behalf of conservation groups to stop the Willow project. Activists say that the approval of a new carbon pollution source contradicts President Joe Biden's promises to slash greenhouse gas emissions in half by 2030 and transition the United States to clean energy. Some activists have characterized the project as a carbon bomb. In a second lawsuit filed on the same day, the Natural Resources Defense Council, Center for Biological Diversity, Greenpeace and others asked the federal Alaska court to vacate the approval. ConocoPhillips immediately started building the ice road, as construction is only possible in the winter, and in April 2023 an appeals court denied an injunction. In August 2023, a college student from Gen-Z for Change protested against the Biden administration's approval of the Willow project at a White House press event, and a video of the event was viewed 10 million times.
In September 2023, Biden cancelled oil and gas leases in the Arctic National Wildlife Refuge, but not for the Willow project.
Environmental justice
In the final SEIS from February 2023, the BLM predicted adverse effects on public health,: 420–27  subsistence: 373, 425, 439  and the sociocultural system.: 439  The Nuiqsut population would be disproportionately affected, with decreased food resource availability, decreased access to harvesting and increased food insecurity.: 439  It found the project would also adversely impact other Native American communities in Utqiaġvik, Anaktuvuk Pass, and Atqasuk. The project could produce up to 600 million barrels of oil and 287 million tons of carbon emissions plus other greenhouse gases over 30 years. The BLM assessments predict the project will adversely impact Arctic wildlife and Native American communities. The Willow project would damage the complex local tundra ecosystem and, according to an older government estimate, release the same amount of greenhouse gases annually as half a million homes.
In June 2023, ConocoPhillips received a $914,000 penalty for its handling of a "shallow underground blowout" of a nearby well in Alpine, Alaska in 2022, during which gas was released uncontrollably at the surface for days across various locations.
See also
Exxon Valdez oil spill
Prudhoe Bay oil spill
References
Further reading
Friedman, Lisa (March 12, 2023). "Biden Administration Approves Huge Alaska Oil Project". The New York Times. ISSN 0362-4331. Retrieved March 17, 2023.
"Could two lawsuits block the Willow Project in Alaska?". March 14, 2023. Retrieved March 17, 2023.
"The Willow oil project debate comes down to this key climate change question". Washington Post. ISSN 0190-8286. Retrieved March 17, 2023. "Haaland criticized over 'difficult' choice on Willow project". Retrieved March 17, 2023. "What is the Willow project in Alaska? Controversial oil drilling plan explained". Retrieved March 17, 2023. Megerian, Chris; ago, Associated Press Updated: 21 hours ago Published: 21 hours. "Backlash over Willow oil project in Alaska strikes at Biden climate legacy". Retrieved March 17, 2023. "What is the controversy behind the Alaska Willow oil project?". March 13, 2023. Retrieved March 17, 2023. "Willow oil project approval intensifies Alaska Natives' rift". ABC News. Retrieved March 17, 2023. Webb, Romany (May 10, 2023). "Rethinking the Willow Project: Did BLM Have Other Options?". Retrieved September 9, 2023.
energy in the united kingdom
Energy in the United Kingdom came mostly from fossil fuels in 2021. Total energy consumption in the United Kingdom was 142.0 million tonnes of oil equivalent (1,651 TWh) in 2019. In 2014, the UK had an energy consumption per capita of 2.78 tonnes of oil equivalent (32.3 MWh), compared to a world average of 1.92 tonnes of oil equivalent (22.3 MWh). Demand for electricity in 2014 was 34.42 GW on average (301.7 TWh over the year), met from a total electricity generation of 335.0 TWh.
Successive UK governments have outlined numerous commitments to reduce carbon dioxide emissions. One such announcement was the Low Carbon Transition Plan launched by the Brown ministry in July 2009, which aimed to generate 30% of electricity from renewable sources, and 40% from low-carbon fuels, by 2020. Notably, the UK is one of the best sites in Europe for wind energy, and wind power production is its fastest-growing supply. Wind power contributed almost 21% of UK electricity generation in 2019. In 2019, the electricity sector's grid supply for the United Kingdom came from 43% fossil-fuelled power (almost all from natural gas), 48.5% zero-carbon power (including 16.8% nuclear power and 26.5% from wind, solar and hydroelectricity), and 8% imports.
Government commitments to reduce emissions are occurring against a backdrop of economic crisis across Europe. During the European financial crisis, Europe's consumption of electricity shrank by 5%, with primary production also facing a noticeable decline. Britain's trade deficit was reduced by 8% due to substantial cuts in energy imports. Between 2007 and 2015, the UK's peak electrical demand fell from 61.5 GW to 52.7 GW; by 2022 it had fallen to 47.1 GW.
UK government energy policy aims to play a key role in limiting greenhouse gas emissions, whilst meeting energy demand. Shifting availability of resources and the development of technologies also change the country's energy mix through changes in costs and consumption. In 2018, the United Kingdom was ranked sixth in the world on the Environmental Performance Index, which measures how well a country implements environmental policy.
Energy sources
Oil
After UK oil production peaked at nearly 3 million barrels per day in 1999, concerns over peak oil production were raised by high-profile voices in the United Kingdom such as David King and the Industry Task-Force on Peak Oil and Energy Security. The latter's 2010 report states that "The next five years will see us face another crunch – the oil crunch. This time, we do have the chance to prepare. The challenge is to use that time well." (Richard Branson and Ian Marchant). However, world peak oil production was not reached, and the debate has instead shifted to oil imports and when peak oil demand will be reached.
In October 2022, it was confirmed that UK Prime Minister Liz Truss would be issuing hundreds of new oil and gas licences. In the same month, Truss said she would not tax the profits of oil and gas corporations to pay for a freeze in energy bills.
Natural gas
The United Kingdom produced 60% of the natural gas it consumed in 2010. In five years the United Kingdom moved from being almost self-sufficient in gas (see North Sea gas) to importing 40% of its gas in 2010. Gas accounted for almost 40% of total primary energy supply (TPES) and more than 45% of electricity generation in 2010. Underground storage amounted to about 5% of annual demand and more than 10% of net imports. There is an alternative fuel obligation in the United Kingdom (see Renewable Transport Fuel Obligation).
Gasfields include the Amethyst gasfield, Armada gasfield, Easington Catchment Area, East Knapton, Everest gasfield and Rhum gasfield. A gas leak occurred in March 2012 at the Elgin-Franklin fields, where about 200,000 cubic metres of gas was escaping every day. Total missed out on about £83 million of potential income.
Coal
Coal power in England and Wales has declined substantially since the beginning of the twenty-first century. The power stations known as the Hinton Heavies closed, and coal is rarely used for power generation as of October 2023. Electricity production from coal in 2018 was less than at any time since the Industrial Revolution, with the first "coal-free day" in 2017 and the first coal-free week in 2019. Coal supplied 5.4% of UK electricity in 2018, down from 7% in 2017, 9% in 2016, 23% in 2015 and 30% in 2014. The UK Government announced in November 2015 that all of the remaining 14 coal-fired power stations would be closed by 2025. In February 2020, the government said that it would consult on bringing the closure date forward to 2024. As of October 2023 there is only one active coal-fired power plant left, Ratcliffe-on-Soar Power Station, which has a planned closure date of September 2024.
Nuclear
Britain's fleet of operational reactors consists of 10 advanced gas-cooled reactors at four discrete sites and one PWR unit at Sizewell B. The total installed nuclear capacity in the United Kingdom is about 6.8 GW. In addition, the UK experimented with fast breeder reactor technologies at Dounreay in Scotland; however, the last fast breeder (with 250 MWe of capacity) was shut down in 1994.
Even with changes to the planning system to speed nuclear power plant applications, there are doubts over whether the necessary timescale could be met to increase nuclear power output, and over the financial viability of nuclear power at present oil and gas prices. With no nuclear plants having been constructed since Sizewell B in 1995, there are also likely to be capacity issues within the domestic nuclear industry. The existing privatised nuclear supplier, British Energy, had been in financial trouble in 2004. In October 2010, the coalition British Government gave the go-ahead for the construction of up to eight new nuclear power plants. However, the Scottish Government, with the backing of the Scottish Parliament, has stated that no new nuclear power stations will be constructed in Scotland.
Renewable energy
In 2007, the United Kingdom Government agreed to an overall European Union target of generating 20% of the European Union's energy supply from renewable sources by 2020. Each European Union member state was given its own allocated target; for the United Kingdom it is 15%. This was formalised in January 2009 with the passage of the EU Renewables Directive. As renewable heat and fuel production in the United Kingdom are at extremely low bases, RenewableUK estimates that this will require 35–40% of the United Kingdom's electricity to be generated from renewable sources by that date, to be met largely by 33–35 GW of installed wind capacity. In the third quarter of 2019, renewables contributed 38.9% of the UK's electricity generation, producing 28.8 TWh of electricity. In June 2017, renewables plus nuclear generated more UK power than gas and coal together for the first time, and new offshore wind power became cheaper than new nuclear power for the first time.
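As a quick, purely illustrative consistency check on the third-quarter 2019 figures above (28.8 TWh amounting to 38.9% of generation), the implied total generation for the quarter can be recovered in a couple of lines; the variable names are hypothetical:

```python
# Back out total UK electricity generation for Q3 2019 from the
# renewables figures quoted above: 28.8 TWh was 38.9% of the total.
renewables_twh = 28.8
renewables_share = 0.389

total_twh = renewables_twh / renewables_share
print(f"Implied Q3 2019 total generation: {total_twh:.1f} TWh")  # ~74.0 TWh
```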
Wind power In December 2007, the United Kingdom Government announced plans for a massive expansion of wind energy production, by conducting a Strategic Environmental Assessment of up to 25 GW worth of wind farm offshore sites in preparation for a new round of development. These proposed sites were in addition to the 8 GW worth of sites already awarded in the two earlier rounds of site allocations, Round 1 in 2001 and Round 2 in 2003. Taken together, it was estimated that this would result in the construction of over 7,000 offshore wind turbines. Wind power delivers a growing fraction of the energy in the United Kingdom; at the beginning of November 2018, wind power in the United Kingdom consisted of nearly 10,000 wind turbines with a total installed capacity of just over 20 gigawatts: 12,254 MW of onshore capacity and 7,897 MW of offshore capacity. In August and September 2021, the UK had to restart coal plants amidst a lack of wind, as power imports from Europe were insufficient to satisfy demand. Solar At the end of 2011, there were 230,000 solar power projects in the United Kingdom, with a total installed generating capacity of 750 MW. By February 2012 the installed capacity had reached 1,000 MW. Solar power use has increased very rapidly in recent years, albeit from a small base, as a result of reductions in the cost of photovoltaic (PV) panels and the introduction of a feed-in tariff (FIT) subsidy in April 2010. In 2012, the government said that 4 million homes across the UK would be powered by the sun within eight years, representing 22,000 MW of installed solar power capacity by 2020. Biofuels Gas from sewage and landfill (biogas) has already been exploited in some areas. In 2004 it provided 129.3 GW·h (up 690% from 1990 levels), and it was the UK's leading renewable energy source, representing 39.4% of all renewable energy produced (including hydro) in 2006. The UK has committed to a target of 10.3% renewable energy in transport to comply with the Renewable Energy Directive of the European Union but has not yet implemented legislation to meet this target. Other biofuels can provide a close-to-carbon-neutral energy source, if locally grown. In South America and Asia, the production of biofuels for export has in some cases resulted in significant ecological damage, including the clearing of rainforest. In 2004 biofuels provided 105.9 GW·h, 38% of it from wood. This represented an increase of 500% from 1990. The UK is importing large quantities of wood pellets from the United States, replacing coal at several generating stations. Geothermal power Investigations into the exploitation of geothermal power in the United Kingdom, prompted by the 1973 oil crisis, were abandoned as fuel prices fell. Only one scheme is operational, the Southampton District Energy Scheme. In 2004, it was announced that a further scheme would be built to heat the UK's first geothermal energy model village near Eastgate, County Durham. Hydroelectric As of 2012, hydroelectric power stations in the United Kingdom accounted for 1.67 GW of installed electrical generating capacity, being 1.9% of the UK's total generating capacity and 14% of the UK's renewable energy generating capacity. Annual electricity production from such schemes is approximately 5,700 GWh, about 1.5% of the UK's total electricity production. There are also pumped-storage power stations in the UK.
These power stations are net consumers of electrical energy; however, they contribute to balancing the grid, which can facilitate renewable generation elsewhere, for example by 'soaking up' surplus renewable output at off-peak times and releasing the energy when it is required. Electricity sector History During the 1940s, some 90% of electricity generation was by coal, with oil providing most of the remainder. With the development of the national grid and the switch to using electricity, United Kingdom electricity consumption increased by around 150% between the post-war nationalisation of the industry in 1948 and the mid-1960s. During the 1960s, growth slowed as the market became saturated. The United Kingdom is planning to reform its electricity market; see also Decarbonisation measures in proposed UK electricity market reform. It plans to introduce a capacity mechanism and contracts for difference to encourage the building of new generation. The United Kingdom started to develop nuclear power capacity in the 1950s, with Calder Hall nuclear power station being connected to the grid on 27 August 1956. Though the production of weapons-grade plutonium was the main reason behind this power station, other civil stations followed, and 26% of the nation's electricity was generated from nuclear power at its peak in 1997. Despite the flow of North Sea oil from the mid-1970s, electricity generation from oil remained relatively small and continued to decline. Starting in 1993, and continuing through the 1990s, a combination of factors led to a so-called Dash for Gas, during which the use of coal was scaled back in favour of gas-fuelled generation. This was sparked by the privatisation of the National Coal Board, British Gas and the Central Electricity Generating Board; the introduction of laws facilitating competition within the energy markets; and the availability of cheap gas from the North Sea. In 1990, just 1.09% of all gas consumed in the country was used in electricity generation; by 2004 the figure was 30.25%. By 2004, coal use in power stations had fallen to 50.5 million tonnes, representing 82.4% of all coal used in 2004 (a fall of 43.6% compared to 1980 levels), though up slightly from its low in 1999. On several occasions in May 2016, Britain burned no coal for electricity for the first time since 1882. On 21 April 2017, Britain went a full day without using coal power for the first time since the Industrial Revolution, according to the National Grid. From the mid-1990s, new renewable energy sources began to contribute to the electricity generated, adding to a small hydroelectricity generating capacity. Electricity generation In 2020, total electricity production stood at 312 TWh (down from a peak of 385 TWh in 2005), generated from the following sources:
Gas: 35.7% (0.05% in 1990)
Nuclear: 16.1% (19% in 1990)
Wind: 24.2% (0% in 1990), of which onshore wind 11.1% and offshore wind 13.0%
Coal: 1.8% (67% in 1991)
Bio-energy: 12.6% (0% in 1990)
Solar: 4.2% (0% in 1990)
Hydroelectric: 2.2% (2.6% in 1990)
Oil and other: 3.3% (12% in 1990)
UK energy policy had targeted a 10% contribution from renewable energy to electricity by 2010, but it was not until 2012 that this figure was exceeded; renewable energy sources supplied 11.3% (41.3 TWh) of the electricity generated in the United Kingdom in 2012. The Scottish Government had a target of generating 17% to 18% of Scotland's electricity from renewables by 2010, rising to 40% by 2020.
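The generation shares listed above can be turned into approximate absolute figures by simple multiplication; a minimal sketch (rounding in the published shares means the totals are approximate):

```python
# Convert the 2020 UK generation shares listed above into TWh.
total_twh = 312  # total 2020 generation, as stated above

shares_pct = {
    "Gas": 35.7, "Nuclear": 16.1, "Wind": 24.2, "Coal": 1.8,
    "Bio-energy": 12.6, "Solar": 4.2, "Hydroelectric": 2.2,
    "Oil and other": 3.3,
}

for source, pct in shares_pct.items():
    print(f"{source:>14}: {total_twh * pct / 100:5.1f} TWh")
# The shares sum to ~100% (100.1% here, due to rounding).
```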
Regional differences While in some ways limited by which powers are devolved, the four nations of the United Kingdom have different energy mixes and ambitions. Scotland currently has a target of 80% of electricity from renewables by 2020, which was increased from an original ambition of 50% by 2020 after it exceeded its interim target of 31% by 2011. Scotland has most of the UK's hydro-electric power generation facilities. It has a quarter of the EU's estimated offshore wind potential, and is at the forefront of testing various marine energy systems. Cogeneration Combined heat and power (CHP) plants, where 'waste' hot water from generating is used for district heating, are a well-tried technology in other parts of Europe. While CHP heats about 50% of all houses in Denmark, Finland, Poland, Sweden and Slovakia, it currently only plays a small role in the United Kingdom. It has, however, been rising, with total generation standing at 27.9 TWh by 2008. This consisted of 1,439 predominantly gas-fired schemes with a total CHP electrical generating capacity of 5.47 GW, contributing 7% of the UK's electricity supply. Heat generation utilisation, however, has fallen from a peak of 65 TWh in 1991 to 49 TWh in 2012. Energy research Historically, public sector support for energy research and development in the United Kingdom has been provided by a variety of public and private sector bodies. The Engineering and Physical Sciences Research Council funds an energy programme spanning energy and climate change research. It aims to "develop, embrace and exploit sustainable, low carbon and/or energy efficient technologies and systems" to enable the United Kingdom "to meet the Government's energy and environmental targets by 2020". Its research includes renewable, conventional, nuclear and fusion electricity supply as well as energy efficiency, fuel poverty and other topics. Since its establishment in 2004, the UK Energy Research Centre has carried out research into demand reduction, future sources of energy, infrastructure and supply, energy systems, sustainability and materials for advanced energy systems. The Energy Technologies Institute, set up to 'accelerate the development of secure, reliable and cost-effective low-carbon energy technologies towards commercial deployment', began its work in 2007 and was due to close at the end of 2019. In relation to buildings, the Building Research Establishment carries out some research into energy conservation. There is currently international research being conducted into fusion power. The ITER reactor is currently being constructed at Cadarache in France. The United Kingdom contributed towards this project through membership of the European Union. Prior to this, an experimental fusion reactor (the Joint European Torus) had been built at Culham in Oxfordshire. Energy efficiency The United Kingdom government has instituted several policies to promote more efficient energy use. These include the roll-out of smart meters, the Green Deal, the CRC Energy Efficiency Scheme, the Energy Savings Opportunity Scheme and Climate Change Agreements. In tackling the energy trilemma, saving energy is the cheapest of all measures. Improving home insulation helps reduce fossil gas imports. Climate change The Committee on Climate Change publishes an annual report on progress in controlling climate change in the United Kingdom. Scotland cut greenhouse gas emissions by around 46% between 1990 and 2014.
Scotland aims to have a carbon-free electricity sector based on renewable energy sources by 2032. Scotland also aims to repair 250,000 hectares (620,000 acres; 2,500 km2) of degraded peatlands, which store a total of 1.7 gigatonnes of CO2. Since 2013, an Energy Company Obligation (ECO) levy on electricity has been in effect. As of 2022, the levy generates around £1 billion. See also
A Green New Deal
Compulsory stock obligation
Energy policy of the United Kingdom
Energy conservation in the United Kingdom
Energy switching services in the UK
Greenhouse gas emissions by the United Kingdom
2021 United Kingdom fuel supply crisis
2021 United Kingdom natural gas supplier crisis
References External links
UK Energy Research Centre
Map of United Kingdom power stations
Energy Analyses in UK
Map of the UK oil and gas infrastructure
ENTSO-E Transparency Platform
special report on emissions scenarios
The Special Report on Emissions Scenarios (SRES) is a report by the Intergovernmental Panel on Climate Change (IPCC) that was published in 2000. The greenhouse gas emissions scenarios described in the report have been used to make projections of possible future climate change. The SRES scenarios, as they are often called, were used in the IPCC Third Assessment Report (TAR), published in 2001, and in the IPCC Fourth Assessment Report (AR4), published in 2007. The SRES scenarios were designed to improve upon some aspects of the IS92 scenarios, which had been used in the earlier IPCC Second Assessment Report of 1995. The SRES scenarios are "baseline" (or "reference") scenarios, which means that they do not take into account any current or future measures to limit greenhouse gas (GHG) emissions (e.g., the Kyoto Protocol to the United Nations Framework Convention on Climate Change). Emissions projections of the SRES scenarios are broadly comparable in range to the baseline emissions scenarios that have been developed by the scientific community. The SRES scenarios, however, do not encompass the full range of possible futures: emissions may change less than the scenarios imply, or they could change more. SRES was superseded by Representative Concentration Pathways (RCPs) in the IPCC Fifth Assessment Report in 2014. There have been a number of comments on the SRES. It has been called "a substantial advance from prior scenarios". At the same time, there have been criticisms of the SRES. The most prominently publicized criticism of SRES focused on the fact that all but one of the participating models compared gross domestic product (GDP) across regions using market exchange rates (MER), instead of the theoretically preferred purchasing-power parity (PPP) approach. Purpose Because projections of climate change depend heavily upon future human activity, climate models are run against scenarios. There are 40 different scenarios, each making different assumptions for future greenhouse gas pollution, land use and other driving forces. Assumptions about future technological development as well as future economic development are thus made for each scenario. Most include an increase in the consumption of fossil fuels; some versions of B1 have lower levels of consumption by 2100 than in 1990. Overall global GDP grows by a factor of between 5 and 25 in the emissions scenarios. These emissions scenarios are organized into families, which contain scenarios that are similar to each other in some respects. IPCC assessment report projections for the future are often made in the context of a specific scenario family. According to the IPCC, all SRES scenarios are considered "neutral". None of the SRES scenarios project future disasters or catastrophes, such as wars, conflicts or environmental collapse. The scenarios are not described by the IPCC as representing good or bad pathways of future social and economic development. Scenario families Scenario families contain individual scenarios with common themes. The six families of scenarios discussed in the IPCC's Third Assessment Report (TAR) and Fourth Assessment Report (AR4) are A1FI, A1B, A1T, A2, B1, and B2. The IPCC did not state that any of the SRES scenarios were more likely to occur than others; therefore, none of the SRES scenarios represents a "best guess" of future emissions. Scenario descriptions are based on those in AR4, which are identical to those in TAR. A1 The A1 scenarios are of a more integrated world.
The A1 family of scenarios is characterized by: rapid economic growth; a global population that reaches 9 billion in 2050 and then gradually declines; the quick spread of new and efficient technologies; a convergent world, in which income and way of life converge between regions; and extensive social and cultural interactions worldwide. There are subsets of the A1 family based on their technological emphasis: A1FI, an emphasis on fossil fuels (Fossil Intensive); A1B, a balanced emphasis on all energy sources; and A1T, an emphasis on non-fossil energy sources. A2 The A2 scenarios are of a more divided world. The A2 family of scenarios is characterized by: a world of independently operating, self-reliant nations; a continuously increasing population; regionally oriented economic development; and high emissions. B1 The B1 scenarios are of a world more integrated, and more ecologically friendly. The B1 scenarios are characterized by: rapid economic growth as in A1, but with rapid changes towards a service and information economy; population rising to 9 billion in 2050 and then declining as in A1; reductions in material intensity and the introduction of clean and resource-efficient technologies; and an emphasis on global solutions to economic, social and environmental stability. B2 The B2 scenarios are of a world more divided, but more ecologically friendly. The B2 scenarios are characterized by: a continuously increasing population, but at a slower rate than in A2; an emphasis on local rather than global solutions to economic, social and environmental stability; intermediate levels of economic development; and less rapid and more fragmented technological change than in A1 and B1. SRES scenarios and climate change initiatives While some scenarios assume a more environmentally friendly world than others, none include any climate-specific initiatives, such as the Kyoto Protocol. Atmospheric GHG concentrations The SRES scenarios have been used to project future atmospheric GHG concentrations. Under the six illustrative SRES scenarios, the IPCC Third Assessment Report (2001) projects the atmospheric concentration of carbon dioxide (CO2) in the year 2100 as between 540 and 970 parts per million (ppm). In this estimate, there are uncertainties over the future removal of carbon from the atmosphere by carbon sinks. There are also uncertainties regarding future changes in the Earth's biosphere and feedbacks in the climate system. The estimated effect of these uncertainties means that the total projected concentration ranges from 490 to 1,260 ppm. This compares to a pre-industrial (taken as the year 1750) concentration of about 280 ppm, and a concentration of about 368 ppm in the year 2000. The United States Environmental Protection Agency has also produced projections of future atmospheric GHG concentrations using the SRES scenarios; these are subject to the same uncertainty regarding the future role of carbon sinks and changes to the Earth's biosphere. Observed emissions rates Between the 1990s and 2000s, the growth rate in CO2 emissions from fossil fuel burning and industrial processes increased (McMullen and Jabbour, 2009, p. 8). The growth rate from 1990 to 1999 averaged 1.1% per year. Between 2000 and 2009, growth in CO2 emissions from fossil fuel burning was, on average, 3% per year, which exceeds the growth estimated by 35 of the 40 SRES scenarios (34 if the trend is computed with end points instead of a linear fit).
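The parenthetical above matters because an average growth rate can be computed either from the two endpoints alone or from a fit to the whole series, and the two methods can disagree on a noisy series. A minimal illustration, using made-up emissions data rather than observed values:

```python
# Two ways to estimate average annual emissions growth:
# (1) compound growth between the first and last years (endpoints), and
# (2) the slope of a least-squares fit to the logarithm of the series.
# The data below are invented for illustration.
import numpy as np

years = np.arange(2000, 2010)
emissions = np.array([6.7, 6.9, 7.0, 7.5, 7.9, 8.1, 8.4, 8.6, 8.9, 8.7])

endpoint_rate = (emissions[-1] / emissions[0]) ** (1 / (len(years) - 1)) - 1

slope, _ = np.polyfit(years, np.log(emissions), 1)
fitted_rate = np.exp(slope) - 1

print(f"endpoint method:   {endpoint_rate:.2%} per year")
print(f"linear-fit method: {fitted_rate:.2%} per year")
```

Here a dip in the final year pulls the endpoint estimate below the fitted trend, which is why the two methods can classify a borderline scenario differently.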
Human-caused greenhouse gas emissions set a record in 2010, a 6% jump on 2009 emissions, exceeding even the "worst case" scenario cited in the IPCC Fourth Assessment Report. Views and analysis MER and PPP The SRES scenarios were criticised by Ian Castles and David Henderson. The core of their critique was the use of market exchange rates (MER) for international comparison, in lieu of the theoretically favoured PPP exchange rate, which corrects for differences in purchasing power. The IPCC rebutted this criticism. The positions in the debate can be summarised as follows. Using MER, the SRES scenarios overstate income differences in past and present, and overestimate future economic growth in developing countries. This, Castles and Henderson originally argued, leads to an overestimate of future greenhouse gas emissions, meaning the IPCC's future climate change projections would have been overestimated. However, the difference in economic growth is offset by a difference in energy intensity. Some say these two opposite effects fully cancel; others say the cancellation is only partial. Overall, the effect of a switch from MER to PPP is likely to have a minimal effect on carbon dioxide concentrations in the atmosphere. Castles and Henderson later accepted this, acknowledging that they had been mistaken in claiming that future greenhouse gas emissions were significantly overestimated. But even though global climate change is not affected, it has been argued that the regional distribution of emissions and incomes is very different between an MER and a PPP scenario. This would influence the political debate: in a PPP scenario, China and India have a much smaller share of global emissions. It would also affect vulnerability to climate change: in a PPP scenario, poor countries grow more slowly and would face greater impacts. Availability of fossil fuels As part of the SRES, IPCC authors assessed the potential future availability of fossil fuels for energy use. SRES assumptions about the availability of fossil fuels are largely based on a 1997 study by Rogner, which argued at length that there are enough fossil resources, i.e. hydrocarbon molecules in the crust, to theoretically sustain production for an extended period of time. The issue of whether or not the future availability of fossil fuels would limit future carbon emissions was considered in the Third Assessment Report; it concluded that limits on fossil fuel resources would not limit carbon emissions in the 21st century. Its estimate for conventional coal reserves was around 1,000 gigatonnes of carbon (GtC), with an upper estimate of between 3,500 and 4,000 GtC. This compares with cumulative carbon emissions up to the year 2100 of about 1,000 GtC for the SRES B1 scenario, and about 2,000 GtC for the SRES A1FI scenario. The carbon in proven conventional oil and gas reserves was estimated to be much less than the cumulative carbon emissions associated with atmospheric stabilization of CO2 concentrations at levels of 450 ppmv or higher. The Third Assessment Report suggested that the future makeup of the world's energy mix would determine whether or not greenhouse gas concentrations were stabilized in the 21st century. The future energy mix might be based more on the exploitation of unconventional oil and gas (e.g., oil sands, shale oil, tight oil, shale gas), or more on the use of non-fossil energy sources, like renewable energy.
Total primary energy production from fossil fuels in the SRES outlooks ranges from a 50% increase on 2010 levels in the B1 family to over 400% in the A1 family. Criticism A direct quote from the abstract of Wang et al.: Climate projections are based on emission scenarios. The emission scenarios used by the IPCC and by mainstream climate scientists are largely derived from the predicted demand for fossil fuels, and in our view take insufficient consideration of the constrained emissions that are likely due to the depletion of these fuels. This persistent problem has been criticized for a long time, as many assumptions used for fossil fuel availability and future production have been optimistic at best and implausible at worst. The SRES and RCP scenarios have been criticized for being biased towards "exaggerated resource availability" and making "unrealistic expectations on future production outputs from fossil fuels". Energy cannot be seen as a limitless input to economic/climate models and remain disconnected from the physical and logistical realities of supply. A recent meta-analysis of the fossil energy outlooks used for climate change scenarios even identified a "return to coal hypothesis", as most mainstream climate scenarios foresee a significant increase in world coal production in the future. Patzek and Croft (2010, p. 3113) made a prediction of future coal production and carbon emissions. In their assessment, all but the lowest-emission SRES scenarios projected far too high levels of future coal production and carbon emissions (Patzek and Croft, 2010, pp. 3113–3114). Similar results were reached by other long-term coal projections. In a discussion paper, Aleklett (2007, p. 17) viewed SRES projections between the years 2020 and 2100 as "absolutely unrealistic". In Aleklett's analysis, emissions from oil and gas were lower than all of the SRES projections, with emissions from coal much lower than the majority of SRES projections (Aleklett, 2007, p. 2). Select Committee report In 2005, the UK Parliament's House of Lords Economic Affairs Select Committee produced a report on the economics of climate change. As part of their inquiry, they took evidence on criticisms of the SRES. Among those who gave evidence to the committee were Dr Ian Castles, a critic of the SRES scenarios, and Prof Nebojsa Nakicenovic, who co-edited the SRES. IPCC author Dr Chris Hope commented on the SRES A2 scenario, which is one of the higher-emissions scenarios of the SRES. Hope assessed and compared the marginal damages of climate change using two versions of the A2 scenario. In one version of the A2 scenario, emissions were as the IPCC projected. In the other version, Hope reduced the IPCC's projected emissions by half (i.e., to 50% of the original A2 scenario). In his integrated assessment model, both of these versions of the A2 scenario led to almost identical estimates of marginal climate damages (the present-day value of emitting one tonne of CO2 into the atmosphere). Based on this finding, Hope argued that present-day climate policy was insensitive to whether or not the validity of the higher-emissions SRES scenarios was accepted. IPCC author Prof Richard Tol commented on the strengths and weaknesses of the SRES scenarios. In his view, the A2 SRES marker scenario was, by far, the most realistic. UK Government departments Defra and HM Treasury argued that the case for action on climate change was not undermined by the Castles and Henderson critique of the SRES scenarios.
They also commented that unless effective action was taken to curb emissions growth, other bodies, like the International Energy Agency, expected greenhouse gas emissions to continue to rise into the future. Comparison with a "no policy" scenario In a report published by the MIT Joint Program on the Science and Policy of Global Change, Webster et al. (2008) compared the SRES scenarios with their own "no policy" scenario. Their no-policy scenario assumes that in the future, the world does nothing to limit greenhouse gas emissions. They found that most of the SRES scenarios were outside of the 90% probability range of their no-policy scenario (Webster et al., 2008, p. 1). Most of the SRES scenarios were consistent with efforts to stabilize greenhouse gas concentrations in the atmosphere. Webster et al. (2008, p. 54) noted that the SRES scenarios were designed to cover most of the range of future emission levels in the published scientific literature. Many such scenarios in the literature presumably assumed that future efforts would be made to stabilize greenhouse gas concentrations. Post-SRES projections As part of the IPCC Fourth Assessment Report, the literature on emissions scenarios was assessed. Baseline emissions scenarios published since the SRES were found to be comparable in range to those in the SRES. IPCC (2007) noted that post-SRES scenarios had used lower values for some drivers of emissions, notably population projections. However, of the assessed studies that had incorporated new population projections, changes in other drivers, such as economic growth, resulted in little change in overall emission levels. Succession In the IPCC Fifth Assessment Report, released in 2014, SRES projections were superseded by Representative Concentration Pathway (RCP) models. See also
SRES scenarios on IPCC Server as Excel Spreadsheet
General circulation model
References Sources
IPCC TAR WG3 (2001), Metz, B.; Davidson, O.; Swart, R.; Pan, J. (eds.), Climate Change 2001: Mitigation, Contribution of Working Group III to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 0-521-80769-7 (pb: 0-521-01502-2).
IPCC TAR SYR (2001), Watson, R. T.; the Core Writing Team (eds.), Climate Change 2001: Synthesis Report, Contribution of Working Groups I, II, and III to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 0-521-80770-0 (pb: 0-521-01507-3).
IPCC AR4 WG1 (2007), Solomon, S.; Qin, D.; Manning, M.; Chen, Z.; Marquis, M.; Averyt, K.B.; Tignor, M.; Miller, H.L. (eds.), Climate Change 2007: The Physical Science Basis, Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88009-1 (pb: 978-0-521-70596-7).
IPCC AR4 WG2 (2007), Parry, M.L.; Canziani, O.F.; Palutikof, J.P.; van der Linden, P.J.; Hanson, C.E. (eds.), Climate Change 2007: Impacts, Adaptation and Vulnerability, Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88010-7 (pb: 978-0-521-70597-4).
IPCC AR4 WG3 (2007), Metz, B.; Davidson, O.R.; Bosch, P.R.; Dave, R.; Meyer, L.A. (eds.), Climate Change 2007: Mitigation of Climate Change, Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88011-4 (pb: 978-0-521-70598-1).
IPCC AR4 SYR (2007), Pachauri, R. K.; Reisinger, A.; et al. (eds.), Climate Change 2007: Synthesis Report, Contribution of Working Groups I, II and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, IPCC, ISBN 92-9169-122-4.
IPCC SRES (2000), Nakićenović, N.; Swart, R. (eds.), Special Report on Emissions Scenarios: A special report of Working Group III of the Intergovernmental Panel on Climate Change (book), Cambridge University Press, ISBN 0-521-80081-1, 978-052180081-5 (pb: 0-521-80493-0, 978-052180493-6).
IPCC SRES SPM (2000), "Summary for Policymakers" (PDF), Emissions Scenarios: A Special Report of IPCC Working Group III (PDF), IPCC, ISBN 92-9169-113-5.
SRES data (2000), SRES Final Data (version 1.1, July 2000), Center for International Earth Science Information Network.
Parson, E.; et al. (July 2007), Global Change Scenarios: Their Development and Use. Sub-report 2.1B of Synthesis and Assessment Product 2.1 by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research, Washington, DC: Department of Energy, Office of Biological & Environmental Research, archived from the original on 30 June 2013.
External links
Report website
Terms of reference
"What is an emission scenario?" by Jean-Marc Jancovici
landfill gas
Landfill gas is a mix of different gases created by the action of microorganisms within a landfill as they decompose organic waste, including, for example, food waste and paper waste. Landfill gas is approximately forty to sixty percent methane, with most of the remainder being carbon dioxide. Trace amounts of other volatile organic compounds (VOCs) make up the rest (<1%). These trace gases include a large array of species, mainly simple hydrocarbons. Landfill gases have an influence on climate change. The major components are CO2 and methane, both of which are greenhouse gases. Methane is a far more potent greenhouse gas: over a 100-year period, each kilogram emitted has roughly twenty-five times the warming effect of a kilogram of carbon dioxide. Methane, however, is present in the atmosphere at much lower concentrations than carbon dioxide. Landfills are the third-largest source of methane in the US. Because of the significant negative effects of these gases, regulatory regimes have been set up to monitor landfill gas, reduce the amount of biodegradable content in municipal waste, and create landfill gas utilization strategies, which include gas flaring or capture for electricity generation. Production Landfill gases are the result of three processes: the evaporation of volatile organic compounds (e.g., solvents); chemical reactions between waste components; and microbial action, especially methanogenesis. The first two depend strongly on the nature of the waste. The dominant process in most landfills is the third, whereby anaerobic bacteria decompose organic waste to produce biogas, which consists of methane and carbon dioxide together with traces of other compounds. Despite the heterogeneity of waste, the evolution of gases follows a well-defined kinetic pattern. Formation of methane and CO2 commences about six months after depositing the landfill material. The evolution of gas reaches a maximum at about 20 years, then declines over the course of decades. Conditions and changes within the landfill can be observed with electrical resistivity tomography (ERT) to detect sources of landfill gas, and leachate movements and pathways. Conditions at different locations, such as temperature, moisture levels and the fraction of biodegradable material, can be inferred, and this information can be used to improve gas production with optimal well locations over hotspots and interventions such as heap irrigation. When landfill gas permeates through a soil cover, a fraction of the methane in the gas is oxidized microbially to CO2. Monitoring Because gases produced by landfills are both valuable and sometimes hazardous, monitoring techniques have been developed. Flame ionization detectors can be used to measure methane levels as well as total VOC levels. Surface monitoring and sub-surface monitoring, as well as monitoring of the ambient air, are carried out. In the U.S., under the Clean Air Act of 1990, many large landfills are required to install gas collection and control systems, which means that at the very least the facilities must collect and flare the gas. U.S. federal regulations under Subtitle D of the Resource Conservation and Recovery Act (RCRA), formed in October 1979, regulate the siting, design, construction, operation, monitoring, and closure of municipal solid waste (MSW) landfills. Subtitle D now requires controls on the migration of methane in landfill gas. Monitoring requirements must be met at landfills during their operation, and for an additional 30 years after.
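The kinetic pattern described under Production (onset after roughly six months, a site-wide peak, then a decades-long decline) is commonly approximated with first-order decay models such as the US EPA's LandGEM. The sketch below is a simplified illustration of that approach, not the LandGEM tool itself; all parameter values are assumptions chosen for the example.

```python
# Simplified first-order decay estimate of landfill methane generation,
# in the spirit of the US EPA LandGEM equation Q = k * L0 * M * exp(-k * t),
# summed over the waste deposited in each year of operation.
# All parameter values are illustrative assumptions.
import math

k = 0.05       # decay rate constant, 1/year
L0 = 100.0     # methane generation potential, m3 CH4 per tonne of waste
M = 100_000    # tonnes of waste deposited per year while the site is open

def site_methane(t, years_open=20):
    """Total CH4 generation rate (m3/year) t years after the site opened."""
    total = 0.0
    for deposit_year in range(min(int(t), years_open)):
        age = t - deposit_year  # years since this cohort was deposited
        total += k * L0 * M * math.exp(-k * age)
    return total

for t in (1, 10, 20, 40, 80):
    print(f"year {t:>2}: {site_methane(t):>12,.0f} m3 CH4/year")
```

With these assumptions, generation climbs while waste is still being added, peaks around closure (here, year 20), and then declines over decades, matching the pattern described above.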
The landfills affected by Subtitle D of RCRA are required to control gas by establishing a way to check for methane emissions periodically and thereby prevent off-site migration. Landfill owners and operators must make sure the concentration of methane gas does not exceed 25% of the lower explosive limit (LEL) for methane in the facilities' structures, or the LEL for methane at the facility boundary. Use The gases produced within a landfill can be collected and used in various ways. The landfill gas can be utilized directly on-site by a boiler or any type of combustion system, providing heat. Electricity can also be generated on-site through the use of microturbines, steam turbines, or fuel cells. The landfill gas can also be sold off-site and sent into natural gas pipelines. This approach requires the gas to be processed into pipeline quality, e.g., by removing various contaminants and components. Landfill gas can also be used to evaporate leachate, another byproduct of the landfill process; this application displaces another fuel that was previously used for the same purpose. The efficiency of gas collection at landfills directly impacts the amount of energy that can be recovered: closed landfills (those no longer accepting waste) collect gas more efficiently than open landfills (those still accepting waste). A comparison of collection efficiency at closed and open landfills found about a 17 percentage point difference between the two. Opposition Capture and use of landfill gas can be expensive. Some environmental groups claim that the projects do not produce "renewable power", because trash (their source) is not renewable. The Sierra Club opposes government subsidies for such projects. The Natural Resources Defense Council (NRDC) argues that government incentives should be directed more towards solar, wind, and energy-efficiency efforts. Safety Landfill gas emissions can lead to environmental, hygiene and security problems in the landfill. Several accidents have occurred, for example at Loscoe, England in 1986, where migrating landfill gas accumulated and partially destroyed a property. An accident causing two deaths occurred from an explosion in a house adjacent to Skellingsted landfill in Denmark in 1991. Due to the risk presented by landfill gas, there is a clear need to monitor gas produced by landfills. In addition to the risk of fire and explosion, gas migration in the subsurface can result in contact between landfill gas and groundwater. This, in turn, can result in contamination of groundwater by organic compounds present in nearly all landfill gas. Although usually evolved only in trace amounts, landfills do release some aromatics and chlorocarbons. Landfill gas migration, due to pressure differentials and diffusion, can occur. This can create an explosion hazard if the gas reaches sufficiently high concentrations in adjacent buildings. By country Brazil United States See also
Anaerobic digestion
Biodegradability
Biogas
Flue gas
Landfill gas utilization
Relative cost of electricity generated by different sources
Underground coal gasification
References External links
GA Mansoori, N Enayati, LB Agyarko (2016), Energy: Sources, Utilization, Legislation, Sustainability, Illinois as Model State, World Sci. Pub. Co., ISBN 978-981-4704-00-7.
"Primer on Landfill Gas as "Green Energy"". Energy Justice Network. Retrieved 2010-04-25.
Koch, Wendy (2010-02-25). "Landfill Projects on the rise". USA Today. Retrieved 2010-04-25.
"Landfill Gas to Energy". Waste Management. Retrieved 2010-04-26.
"Landfill Gas". Gas Separation Technology LLC. Archived from the original on 2017-05-06. Retrieved 2010-04-26.
"Landfill Gas Control Measures". Agency for Toxic Substances & Disease Registry. Retrieved 2010-04-26.
electricity sector in turkey
Turkey uses more electricity per person than the global average, but less than the European average, with demand peaking in summer due to air conditioning. Most electricity is generated from coal, gas and hydropower, with hydroelectricity from the east transmitted to big cities in the west. Electricity prices are state-controlled, but wholesale prices are heavily influenced by the cost of imported gas. Each year, about 300 terawatt-hours (TWh) of electricity is used, which is almost a quarter of the total energy used in Turkey. On average, about four hundred grams of carbon dioxide is emitted per kilowatt-hour of electricity generated (400 gCO2/kWh); this carbon intensity is slightly less than the global average. As there is 100 GW of generating capacity, far more electricity could be produced, although only a tiny proportion is exported; consumption is forecast to increase, and there are plans for more exports during the 2020s. Turkey's coal-fired power stations are the largest source of the country's greenhouse-gas emissions. Many brown coal power stations are subsidized, which increases air pollution. Imports of gas, mostly for Turkey's power stations, are one of the main expenses for the country. In winter, electricity generation is vulnerable to reductions in the gas supply from other countries. Solar and wind power are now the cheapest generators of electricity, and more of both are being built. If enough solar and wind power is built, the country's hydroelectric plants should be enough to cover windless, cloudy weeks. Renewables generate a third of the country's electricity, and academics have suggested that the target of 32% renewable energy by 2030 be increased to 50%, and that coal power should be phased out by the mid-2030s. Increased use of electric vehicles is expected to increase electricity demand. Consumption Each year, about 300 TWh of electricity is used in Turkey: this supplies almost a quarter of the total final energy demand,: 19  the rest being from coal, oil and gas. Due to air conditioning, demand peaks in summer, with August typically the highest month (32 TWh in 2021) and February the lowest (24 TWh in 2021). Total national consumption divided by the population is under 4,000 kWh a year, much below the average of around 10,000 kWh a year for other OECD countries in Europe,: 17  but half as much again as the global average. Shares of energy usage in 2019 totaled 45% for industry, 29% for services and 21% for households.: 16  Consumption is forecast to increase. As of 2021, household electricity consumption is estimated to average 230 kWh a month and is dominated by refrigerators, followed by televisions and then washing machines. Space heating and electric vehicles have the biggest potential for demand-side response.: 51  Between 2019 and 2024, Turkey plans to invest US$11 billion in energy efficiency, and by 2035 to replace 80% of electricity meters with smart meters. Electricity's share of energy consumption is expected to increase, from 22% in 2019 to perhaps 28% in 2040, partly due to electrification of road transport. Demand forecasts Demand forecasting is important, because constructing too much electricity generation capacity can be expensive, both for government energy subsidies and private sector debt interest.
Conversely, constructing too little risks delaying the health benefits of electrification, the biggest of which is cleaner air due to fossil fuel phase-out. Distribution companies, some retail companies, and industrial zones send their demand forecasts to the Energy Ministry and the Turkish Electricity Transmission Corporation (TEİAŞ) every year.: 21  TEİAŞ then publishes low, base and high 10-year forecasts,: 21  using the "DECADES" model, whereas the Energy Ministry uses the "Model for Analysis of Energy Demand". Some official demand forecasts have proved to be overestimates, which could be due to low economic growth. In 2019 actual generation was 76% of firm capacity, and overcapacity continued into the early 2020s. Industry The share of electricity used in industry is expected to increase at the expense of the fossil fuel share as Turkey moves to more technology manufacturing.: 343  Less coal is being burnt for industry, and oil burning remains static.: 343  One projection even shows electricity overtaking gas to become the largest industrial energy source at 30%;: 343  however, more efficient lighting and industrial motors, together with policy changes supporting efficiency, could limit demand growth.: 340  Electrification of transport In 2021, fewer than 3,000 fully electric cars were sold; however, production and use of some types of electric vehicles, such as cars manufactured by Togg, may increase demand during the 2020s.: 10  Shura Energy Transition Center, a think tank, has recommended charging electric cars automatically when plenty of wind and solar power is available.: 19  The architecture of Turkey means that many city dwellers live in apartment blocks without off-street parking: regulations require at least one charger per 50 new parking spaces in shopping malls and public parking lots. Getting old diesel cars and trucks off the road would have health and environmental benefits, but this would require new pollution control legislation, and as of 2021 the only commercial electric vehicles planned for mass production are vans. The government aims to end sales of fossil fuel cars and lorries by 2040. Ford hopes to build a factory to make batteries for commercial electric vehicles. Generation Of the total 329 TWh of electricity generated in 2021, natural gas produced 42%, coal 26%, hydropower 13%, and wind 10%. Installed capacity reached 100 GW in 2022. Academics have suggested that the target of 32% from renewables by 2030 should be increased to at least 50%. The state-owned Electricity Generation Company (EÜAŞ) has about 20% of the market,: 8  and there are many private companies. The carbon intensity of generation during the 2010s was slightly over 400 gCO2/kWh, around the global average. Coal Gas In 2020, power plants consumed 29% of the natural gas used in Turkey. State-owned gas-fired power plants are less efficient than private plants, but can out-compete them, as the state guarantees a price for their electricity. Gas power plants are used more when drought reduces hydropower, such as in 2021, which was a record year for gas consumption. The National Energy Plan published in 2023 forecasts that 10 GW more of gas power plants will be built. Hydropower Hydropower is a critical source of electricity, and in some years substantial amounts can be generated due to Turkey's mountainous landscape, its abundance of rivers, and its being surrounded by three seas. The main river basins are the Euphrates and the Tigris.
Many dams have been built throughout the country, and a peak of 28 GW of power can be generated by hydroelectric plants. Almost 90 TWh was generated in 2019, around 30% of the country's electricity. There are many policies that support hydroelectricity. Construction of some dams has been controversial for various reasons: for example, environmentalists claiming they damage wildlife such as fish, or downstream countries complaining of reduced water flow. Due to changes in rainfall, generation varies considerably from year to year. According to S&P Global Platts, when there is drought in Turkey during the peak electricity demand month of August, the aim of the State Hydraulic Works to conserve water for irrigation can conflict with the Turkish Electricity Transmission Corporation's aim of generating electricity. Despite droughts increasing due to climate change, hydropower is predicted to remain important for load balancing.: 72  Wind Solar Turkey is located in an advantageous position in the Middle East and Southeast Europe for solar energy, and it is a growing part of renewable energy in the country, with almost 8 GW generating about 4% of the country's electricity. Solar potential is high in Turkey, especially in the south-east and Mediterranean provinces. Conditions for solar power generation are comparable to Spain. In 2020 Turkey ranked 8th in Europe for solar power,: 49  but it could increase far more quickly if subsidies for coal were abolished and the auction system were improved. Every gigawatt of solar power installed would save over US$100 million in gas import costs. Peak monthly generation in 2020 was over 1 TWh, in September. According to modelling by Carbon Tracker, new solar power became cheaper than new coal power in 2020, and will become cheaper than existing coal plants in 2023. According to think tank Ember, building new solar and wind power in Turkey is cheaper than running existing coal plants which depend on imported coal. But it says that there are obstacles to building utility-scale solar, such as a lack of new capacity for solar power at transformers, a 50 MW cap on any single solar power plant's installed capacity, and large consumers being unable to sign long-term power purchase agreements for new solar installations. Unlicensed power plants, which are mostly solar, generated about 4% of electricity in 2021.: 13  Geothermal There are almost 2 gigawatts of electrical geothermal power in Turkey, which is a significant part of renewable energy in Turkey. Geothermal power in Turkey began in the 1970s, in a prototype plant, following systematic exploration of geothermal fields. In the 1980s the pilot facility became the country's first geothermal power plant, and in 2013 this small plant was expanded to become the country's biggest. Over 60 power plants operate in Turkey as of 2020, with potential for more, including sites suitable for enhanced geothermal systems. As well as contributing to electricity generation, geothermal energy is also used in direct heating applications. At the end of 2021 Turkey had 1.7 GW of installed capacity, the fourth largest in the world after the United States, Indonesia and the Philippines.
However, carbon dioxide emissions from geothermal power can be high, especially for new plants, so to prevent carbon dioxide dissolved out of the rocks from being released into the atmosphere, the fluid is sometimes completely reinjected after its heat is used. Nuclear Turkey's first nuclear power plant, at Akkuyu, is planned to start generation in 2023, and is expected to last for at least 60 years. The nuclear power debate has a long history, with the 2018 construction start in Mersin Province being the sixth major attempt to build a nuclear power plant since 1960. Nuclear power has been criticised as being very expensive for taxpayers. Plans for a nuclear power plant at Sinop and another at İğneada have stalled. Hybrid, distributed and virtual generation Hybrid generation became more popular in the early 2020s. If distributed generation installed power is under 11 kW, it is only allowed to be connected to the low-voltage network, not the high-voltage network. The first virtual power plant was created in 2017 with wind, solar and hydropower; geothermal was added in 2020. Transmission and storage The transmission system operator is the Turkish Electricity Transmission Corporation (TEİAŞ), which is a state-owned monopoly as of 2022.: 11  It is planned to sell a minority share to the private sector in 2022. Transmission is regulated by the Energy Market Regulatory Authority (EMRA). The first long-distance transmission line, from Zonguldak to Istanbul, was built in 1952, and as of 2021 there are 72,000 km of transmission lines. The grid runs at 400 kV and 154 kV, and there are over 700 transmission grid substations. Transmission costs, including losses and operation costs, are shared equally between producer and consumer.: 70  Reducing grid losses and outages is important, as is improving grid quality. Power consumption is often distant from generation, so grid improvements are needed to prevent bottlenecks and increase flexibility. There are 11 international interconnectors, linking all of Turkey's land neighbours except Armenia (although relations are improving). Although TEİAŞ is no longer an observer member of ENTSO-E, it continues to attend technical discussions of working groups.: 105  As of 2020, links with the European Union allow 500 MW of export and 650 MW of import, whereas trade with other countries is possible but difficult to automate, as they do not meet ENTSO-E synchronisation requirements. In 2020 total exports were 2.5 TWh, mostly to Greece, and imports 1.9 TWh, mostly from Bulgaria.: 39  According to a 2018 study by Sabancı University, 20% of Turkey's electricity may be generated from wind and solar by 2026 with no extra transmission costs, and 30% with a minor increase in grid investment. With the increase in electricity generated by solar panels, energy storage may become more important. A pumped hydropower plant is planned to be completed by 2022, and converting existing dams to pumped storage has been suggested as more feasible than building new pumped storage. Mobile 10 MW batteries may be useful in the future for reducing temporary transmission congestion between regions, or larger ones for frequency regulation. Adding ice thermal storage to hypermarket cooling systems is estimated to be economically viable. The nationwide blackout in 2015 was not caused by a natural disaster, but by the limited capacity and lack of resilience of the main east-west connection while it was being maintained, leaving it unable to redistribute enough of the eastern hydroelectricity to the high-consuming west.
It did not greatly affect Van Province, which was supplied from Iran, and the EU interconnection helped restore power. More integration with other countries would increase resilience. New wind and solar in the west and centre of the country is closer to demand and is thus reducing dependence on high-voltage transmission. Distribution As part of electricity industry reforms between 2009 and 2013, the ownership of all electricity distribution infrastructure was retained by the state-owned Turkish Electricity Distribution Corporation (TEDAŞ), but responsibility for operation, maintenance and new investment in distribution networks was transferred to 21 privately owned regional entities under licences from EMRA. Electricity at voltages up to 36 kV is distributed by regional companies and many organized industrial zones. There are over a million kilometres of distribution lines, of which about 80% are overhead lines and the rest are underground cables. The average losses across all distribution networks (including both technical and non-technical losses) are around 12%, but in Dicle and Vangölü they are over 20% (EPDK, 2022). In 2019 TEDAŞ estimated the System Average Interruption Duration Index (OKSÜRE in Turkish) at 1308, which is much worse than neighbouring European countries; however, no estimate has been published since then.: 27  Nevertheless, at least one distribution company measures it, together with the related frequency index (OKSIK in Turkish).: 73  There are plans for a smart grid. According to the Shura Energy Center, increasing Turkey's proportion of electric cars to 10% by 2030 would smooth distribution, amongst many other benefits. According to the Chamber of Electrical Engineers, the regional monopolies make excess profits. Their income is determined by EMRA, which sets distribution charges annually. Resilience Earthquakes in Turkey are common and sometimes cut transmission lines and destroy substations. If the permanent supervisory control centre of a distribution grid is destroyed in a disaster, a mobile centre may take control. The installation of more local solar power with batteries, and of microgrids in vulnerable places, might help vital buildings such as hospitals retain power after a natural disaster, such as an earthquake or flood. Academics suggest that cost-benefit analysis of such emergency power systems should take into account any benefits of resilience and also the cost of installing an islandable system. Market Energy Exchange Istanbul (EXIST) is the electricity market operator company responsible for the day-ahead and intra-day markets. EXIST was established in 2015 and operates under a license from the Energy Markets Regulatory Authority (EMRA). As of 2022 the wholesale price is the same across the country, but it has been suggested that price zones should be defined to reflect network congestion, for example in getting run-of-the-river hydropower to consumers. The wholesale price is generally lowest in spring, due to moderate temperatures and abundant hydropower. Although the wholesale market is operated by EXIST, prices are controlled by EÜAŞ, the state electricity generation company. Gas-fired power stations set the market price. The National Load and Dispatch Centre prepares forward estimates of demand for each hour, and these are used to guide scheduling of generation 24 hours in advance. The Turkish Electricity Transmission Company (TEİAŞ) is the physical operator of the balancing power market and the ancillary services market.
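Since gas-fired stations, as noted above, typically set the market price, the mechanism can be illustrated with a minimal merit-order dispatch: generators are stacked from cheapest to most expensive, and the clearing price is the marginal cost of the last plant needed to meet demand. All plant names, capacities and costs below are invented for illustration and are not market data.

```python
# Minimal merit-order sketch: the market clearing price equals the
# marginal cost of the most expensive plant that must run to meet demand.
# All figures are invented for illustration.
plants = [
    # (name, capacity in GW, marginal cost in lira/MWh)
    ("wind and solar", 10, 0),
    ("hydro", 8, 50),
    ("lignite", 9, 600),
    ("gas", 20, 1500),
]

def clearing_price(demand_gw):
    """Dispatch cheapest-first and return the marginal plant's cost."""
    supplied = 0.0
    for name, capacity_gw, cost in sorted(plants, key=lambda p: p[2]):
        supplied += capacity_gw
        if supplied >= demand_gw:
            return name, cost
    raise ValueError("demand exceeds total capacity")

print(clearing_price(35))  # ('gas', 1500): gas sets the price
print(clearing_price(25))  # ('lignite', 600): lignite sets the price
```

When demand is high enough that gas must run, which is common, the gas plants' fuel cost sets the price for all traded energy; this is the dependence on gas prices described next.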
Because the price is determined at the margin, the electricity price is very dependent on the natural gas price. The government has capped the wholesale electricity price at three times the average of the previous 12 months, which is high enough for gas and imported-coal plants to remain in operation even when their fuel costs are high.: 14  Because gas-fired power plants are often the price setters, wholesale electricity prices are strongly influenced by wholesale natural gas prices, which are themselves influenced by the USD exchange rate.: 64  Owning over 20% of capacity,: 24  the state Electricity Generation Company is a key player in the market, along with private wholesalers (such as Enerjisa, Cengiz, Eren, Limak and Çelikler: 52 ) and an over-the-counter market.: 9  In 2019, about half of the electricity generated (150 TWh) was traded on the day-ahead spot market. Market pricing is not completely transparent, cost-reflective and non-discriminatory. When the lira falls, bilateral contracts are sometimes unable to compete with regulated tariffs, but when the exchange rate is stable, industrial customers prefer bilateral contracts (almost no households are on them). In 2021 EXIST launched an electricity futures market. Although, as of 2021, there is a lot of excess generation capacity, very little is exported. In 2021, Turkey exported 4.1 TWh and imported 2.3 TWh. International trade with some countries is hampered by geopolitical difficulties such as the Cyprus dispute; for example, Turkey will be bypassed by the EuroAsia Interconnector. Because TEİAŞ is not unbundled, it cannot become a full member of the European Network of Transmission System Operators for Electricity (ENTSO-E), but the grids are synchronised and there is technical co-operation. The grid is linked across most land borders, and about 1% of electricity is imported or exported. Technical studies are being done on increasing connections with the European grid. In 2022 export capacity to Iraq was increased from 150 MW to 500 MW. Some power barges supplying other countries burn heavy fuel oil but plan to convert to LNG. For exports to the EU, the Carbon Border Adjustment Mechanism (CBAM) will be phased in from 2023 to 2026. Although Turkish electricity is likely to be cheaper than that generated in the EU, the impact of the CBAM is unclear as of 2021. More linking transmission is needed, and becoming a full member of ENTSO-E would help exports.
In the case of natural disasters or pandemics, the Ministry of Energy and Natural Resources may cover the financial costs resulting from the postponement (for up to one year) of electricity bills, but not the bill amounts themselves. As of 2022 the VAT rate for residential customers and agricultural irrigation is 8%.
Economics and finance
As elsewhere, new renewables are auctioned. In 2019 the value-adjusted levelized cost of energy (VALCOE, the cost including power system value but not environmental externalities) of onshore wind was slightly less than that of solar PV, but solar PV is expected to become the most cost-competitive power generation technology by the late 2020s. According to the Chamber of Engineers, 75% of electricity in 2021 was dollar-indexed. In 2021 new wind and solar became cheaper than existing power stations burning imported coal. As of 2018, if all currently economic renewable projects were developed, the added electricity generation would be sufficient to reduce Turkey's natural gas imports by 20%, and every GW of solar power installed would save over $100 million on the gas bill. According to EMRA, exports to the EU accompanied by YEK-G certificates will be exempt from the electricity CBAM.: 88 
As of 2019, about 15% of power was generated by the public sector. During the 2010s, power companies borrowed heavily in dollars, but economic growth was overestimated and they overbuilt generating capacity. This resulted in bank debts of $34 billion by 2019 and revenues declining in dollar terms due to the fall in the lira; furthermore, 7% of debts were non-performing. In the early 2020s, Turkish electricity companies still owe much foreign currency, debt is being restructured, and plants are changing ownership. In 2021 BOTAŞ charged more for gas than before, leaving gas-fired power stations at a disadvantage to coal-fired power stations. About half the electricity used in 2019 was generated from local resources; total import dependency in the power sector was over 50% in 2019. It has, for example, been predicted that more trade would benefit electricity in Bulgaria by stabilizing its price. The main growth in solar and wind during the 2020s is predicted to be in Renewable Energy Resource Areas (YEKA): these use auctions and include a requirement to manufacture mostly in Turkey. The EU has complained that local content requirements are against trade agreements. The build-own-operate model is being used to construct the Akkuyu nuclear plant, so that responsibility for cost overruns rests with Rosatom. Power purchase agreements are offered by the government for both nuclear and local coal. The financing of the National Energy Efficiency Action Plan, and its continuation beyond 2023, is unclear.
Capacity payments
The capacity mechanism regulation says that the purpose of the payments is to create sufficient installed power capacity, including the spare capacity required for supply security in the electricity market, and/or to maintain reliable installed power capacity for long-term system security. The 2021 capacity mechanism budget was 2.6 billion lira (US$460 million). Some hydropower plants, plants burning local coal, and plants older than 13 years burning imported fuel are eligible. In 2022 ten hydro plants, several gas power plants and many lignite-fired plants were eligible for the capacity mechanism; payments were based on variable cost components and the market exchange price, as well as fixed cost components and the total installed power capacity by source. These payments have been criticised by some economists. A study published in 2023 surveyed experts and found that most wanted the capacity mechanism to be reformed, for example by including demand response or zonal pricing; however, policymakers were not keen on raising the price cap.
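The VALCOE mentioned at the start of this section builds on the plain levelized cost of energy. A minimal Python sketch of ordinary LCOE, ignoring the system-value adjustment and externalities, is below; the wind-farm parameters are invented placeholders, not Turkish market data:

    def lcoe(capex, annual_opex, annual_mwh, rate, years):
        # Levelized cost = discounted lifetime costs / discounted lifetime output.
        discount = [(1 + rate) ** -t for t in range(1, years + 1)]
        return (capex + annual_opex * sum(discount)) / (annual_mwh * sum(discount))

    # Hypothetical 100 MW wind farm: $120m capex, $3m/yr opex,
    # 35% capacity factor, 8% discount rate, 25-year life.
    print(round(lcoe(120e6, 3e6, 100 * 8760 * 0.35, 0.08, 25)))  # roughly $46/MWh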
Feed-in-tariffs
As of 2021, feed-in-tariffs in lira per MWh are: wind and solar 320, hydro 400, geothermal 540, and various rates for different types of biomass; for all of these there is also a bonus of 80 per MWh if local components are used. Tariffs apply for 10 years, and the local bonus for 5 years. Rates are determined by the presidency, and the scheme replaced the previous USD-denominated feed-in-tariffs for renewable energy. Thus, as in some other countries, the wholesale price of renewable electricity is much less volatile in local currency than the price of fossil-fuelled electricity.
End user pricing
The complicated system of prices to end consumers is regulated by the government. A green tariff called YETA (its certificates are called YEK-G), which allows consumers to buy only sustainable electricity, was introduced in 2021. The YETA price: 88  is higher than the regular price: 89  by a certain amount per kWh (about 1 lira in 2022).: 35 
Electricity prices were greatly increased in early 2022, following a large depreciation of the lira in 2021. Household consumption under 210 kWh a month is priced at a cheaper rate. There is some time-based pricing, with 2200 to 0600 being cheapest, followed by 0600 to 1700, and 1700 to 2200 being the most expensive. According to the Shura Energy Center, moving to more time-based end-user pricing would be beneficial, with prices somewhat higher in the early morning and a lot higher in the late afternoon, as there is plenty of sunshine to meet demand in the middle of the day (see also duck curve). Shura suggested in 2020 that future pricing should be more competitive and better reflect costs, with low-income families continuing to be supported with direct payments. Vulnerable families are supported with direct payments for their electricity consumption up to 150 kWh/month. In early 2022, prices for small businesses became a political issue, as they had risen considerably due to global energy price rises and the depreciation of the lira. There were street protests, and the main opposition Republican People's Party leader Kemal Kılıçdaroğlu refused to pay his own bill in support. The president said that businesses would also be moved to a tiered pricing system, that the number of households supported would be almost doubled to four million, and that civil society organisations would be moved to the household rate. In 2023 Shura suggested that the 5% electricity consumption tax (ETV or BTV) on residential customers was unfairly disadvantaging electricity relative to gas, for example by taxing electricity powering heat pumps more heavily than gas for heating. They said that taxes and subsidies for residential gas and electricity should at least be equalized.: 17–18 
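The three time-of-use periods described above map cleanly onto a small helper function; the band names are mine, and only the hour boundaries come from the tariff:

    def tariff_band(hour):
        # 2200-0600 is cheapest, 0600-1700 is the middle rate, 1700-2200 is dearest.
        if 6 <= hour < 17:
            return "day"
        if 17 <= hour < 22:
            return "peak"
        return "night"

    assert tariff_band(3) == "night"
    assert tariff_band(12) == "day"
    assert tariff_band(18) == "peak"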
Greenhouse-gas emissions
Turkey's coal-fired power stations (many of which are subsidized) are the largest source of greenhouse-gas emissions by Turkey. Production of public heat and electricity emitted 131 megatonnes of CO2 equivalent (CO2e) in 2020,: table 1s1 cell B10  mainly through coal burning. Almost all coal burnt in power stations is local lignite or imported hard coal. Coal analysis of Turkish lignite, compared with other lignites, shows that it is high in ash and moisture, low in energy value, and high in emission intensity; that is, Turkish lignite emits more CO2 than other countries' lignites per unit of energy when burnt. Although imported hard coal has a lower emission intensity when burnt, because it is transported much further, its life-cycle greenhouse-gas emissions are similar to those of lignite.: 177 
Unlike in other European countries, emission intensity has not improved since 1990 and remains over 400 g of CO2 per kWh, around the average for G20 countries. Investment in wind and solar is hampered by subsidies for coal.: 10  According to a 2021 study by several NGOs, if coal power subsidies were completely abolished and a carbon price introduced at around US$40 (much lower than the price of an EU Allowance), then no coal power plant would be profitable and all would close down before 2030. A 2021 decarbonization plan by Istanbul Policy Center, a think tank, has almost all coal power shut down by 2035, whereas natural gas plants would continue to run to provide flexibility for greatly increased wind and solar, but at a much lower capacity factor. The Turkish Solar Industry Association suggests that building solar plants next to hydropower would help to stabilize output in times of drought. Shura also suggests that excess renewable electricity could be used to produce green hydrogen. Turkey is not aligned with the EU carbon capture and storage directive.
Policy and regulation
As of 2020, Turkey's three main policy objectives are to meet forecast growth in demand, to maintain a predictable market, and to reduce import costs. To meet these objectives, policy includes increasing generation from solar, wind and domestic coal, and starting to produce nuclear energy. As of 2022 some of these generation methods are subsidized; for example, EÜAŞ will purchase the forthcoming nuclear power at an agreed price. Coal is heavily subsidized in Turkey. Storage and transmission improvements are also supported, for example increasing the amount of pumped hydro. The government aims for half of electricity to be from renewable energy by 2023, with capacity targets of 32 GW for hydropower, 12 GW for wind, 10 GW for solar, and 3 GW for biomass and geothermal combined. The Shura Energy Transition Center has suggested that longer-term plans and targets would also be useful, together with a policy on distributed generation and market design that incentivizes grid flexibility. The objectives include developing local manufacturing capacity, such as for wind turbines, technology transfer, and creating a competitive domestic market for low-cost renewable energy. For wind and solar tenders there is a high domestic content requirement, and imported solar modules are taxed. According to the European Commission, the domestic content requirements contradict World Trade Organization and EU–Turkey Customs Union rules. A solar PV factory was opened in 2020. Developing regulation to specify the role of aggregators in providing flexibility, and including energy storage systems and demand-side management within ancillary services, has been suggested. In 2023 the Chamber of Mechanical Engineers criticised the just-published National Energy Plan as amateurish: they said that it forecast generation of 174 TWh in 2035 from 57 GW of fossil fuel power plants, whereas in 2021, 215 TWh had been generated from 46 GW installed.
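The Chamber's point can be checked by computing the implied fleet-average capacity factor, generation divided by capacity times the 8,760 hours in a year. The comparison is rough, since the plan's 57 GW covers only fossil fuel plants while the 2021 figure is for all installed capacity:

    def implied_capacity_factor(twh, gw):
        # Average output as a fraction of what the capacity could produce flat out.
        return twh * 1000 / (gw * 8760)

    print(round(implied_capacity_factor(215, 46), 2))  # 2021 actuals: about 0.53
    print(round(implied_capacity_factor(174, 57), 2))  # plan's 2035 figures: about 0.35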
History
In 1875 a French company was awarded a five-year concession to supply power to Istanbul's Üsküdar district, Thessaloniki and Edirne, and a four-year concession for electric lighting of several other cities. However, despite the agreement, no progress was made.: 3  The first power station in the Ottoman Empire was a small hydroelectric power station built in 1902 outside Tarsus. Electricity was transmitted to the city centre at high voltage, then distributed to customers at low voltage for their lighting. During this period tenders for power were generally awarded to foreigners, owing to a lack of Ottoman finance and expertise.: 72, 73 
Generating power in Istanbul for tramlines, lighting and the telephone network from 1914, Silahtarağa Power Station (now a museum that is part of SantralIstanbul) was the first large power station. By the start of the Turkish Republic in 1923, one in twenty people was supplied with electricity. Between 1925 and 1933 many cities built diesel-fired power stations, and a couple were powered by wood gas.: 4 
The electricity sector was nationalized in the late 1930s and early 1940s, and by the end of nationalization almost a quarter of the population was supplied with electricity. However, only big cities such as Istanbul, Ankara and Izmir received continuous electricity in the 1950s; other cities were electrified only between dusk and 10 or 11 in the evening.: 243 
The Turkish Electricity Authority was created in 1970 and consolidated almost all of the sector. By the end of the 20th century, almost all of the population was supplied with electricity. Privatization of the electricity sector started in 1984 and began "in earnest" in 2004, after the Electricity Market Law was passed in 2001. In 2009 electricity demand fell due to the Great Recession.: 14  In 2015 there was a one-day national blackout, and an independent energy exchange was created. Also in the 2010s, the grid was synchronized with continental Europe, and the Turkish Electricity Transmission Corporation (TEİAŞ) joined the European Network of Transmission System Operators for Electricity (ENTSO-E) as an observer, although it later left. Energy efficiency and generation goals were set for 2023, the centenary of the establishment of modern Turkey.
Notes
References
Sources
Ayas, Ceren (2020). Decarbonization of Turkey's economy: long-term strategies and immediate challenges (Report). CAN Europe, SEE Change Net, TEPAV.
Difiglio, Carmine; Güray, Bora Şekip; Merdan, Ersin (November 2020). Turkey Energy Outlook (Report). iicec.sabanciuniv.edu. Sabanci University Istanbul International Center for Energy and Climate (IICEC). ISBN 978-605-70031-9-5.
Godron, Philipp; Cebeci, Mahmut Erkut; Tör, Osman Bülent; Saygın, Değer (2018). Increasing the Share of Renewables in Turkey's Power System: Options for Transmission Expansion and Flexibility (PDF) (Report). SHURA Energy Transition Center. ISBN 978-605-2095-22-5.
IEA (March 2021). Turkey 2021 – Energy Policy Review (Technical report). International Energy Agency.
TEİAŞ 2019–2023 Stratejik Planı [Turkish Electricity Transmission Corporation 2019–2023 Strategic Plan] (Report) (in Turkish). Turkish Electricity Transmission Corporation.
Godron, Philipp; Saygın, Değer (2018). Lessons from global experiences for accelerating energy transition in Turkey through solar and wind power (PDF) (Report). SHURA Energy Transition Center. ISBN 978-605-2095-40-9.
Saygın, Değer; Tör, Osman Bülent; Teimourzadeh, Saeed; Koç, Mehmet; Hildermeier, Julia; Kolokathis, Christos (December 2019). Transport sector transformation: Integrating electric vehicles into Turkey's distribution grids (PDF) (Report). SHURA Energy Transition Center.
Turkey 2021 Report (see chapters 15, Energy, and 27, Environment and climate change) (Report). European Commission. 2021.
Turkey's Energy Transition: Milestones and Challenges (PDF) (Report). World Bank. 2015.
Investor's Guide for Electricity Sector in Turkey (Report). Ministry of Energy and Natural Resources (Turkey). December 2020.
Turkey Smart Grid 2023 Vision and Strategy Roadmap Summary Report (Report). ELDER, Association of Distribution System Operators. 2018.
Akyazı, Pınar Ertör; Sperfeld, Franziska; Helgenberger, Sebastian; Şahin, Ümit; Nagel, Laura, eds. (December 2020). Cobenefits Policy Report: Unlocking the co-benefits of decarbonising Turkey's power sector (Report). IASS IPC/UfU.
Further reading
National Energy Plan to 2035 (published 2022)
External links
Markets, generation and consumption short-term statistics (Energy Exchange Istanbul)
Hourly generation by source for a selected day (Turkish Electricity Transmission Corporation)
Annual generation statistics (in Turkish) (Turkish Electricity Transmission Corporation)
Retail prices
Power flow simulator (Association of Distribution System Operators Smart Grid Turkey)
Live carbon emissions from electricity generation (electricityMap Live, built by Tomorrow)
oil sands
Oil sands, tar sands, crude bitumen, or bituminous sands, are a type of unconventional petroleum deposit. Oil sands are either loose sands or partially consolidated sandstone containing a naturally occurring mixture of sand, clay, and water, soaked with bitumen, a dense and extremely viscous form of petroleum. Significant bitumen deposits are reported in Canada, Kazakhstan, Russia, and Venezuela. The estimated worldwide deposits of oil are more than 2 trillion barrels (320 billion cubic metres); proven reserves of bitumen contain approximately 100 billion barrels, and total natural bitumen reserves are estimated at 249.67 Gbbl (39.694×10^9 m3) worldwide, of which 176.8 Gbbl (28.11×10^9 m3), or 70.8%, are in Alberta, Canada. Crude bitumen is a thick, sticky form of crude oil, so viscous that it will not flow unless heated or diluted with lighter hydrocarbons such as light crude oil or natural-gas condensate; at room temperature, it is much like cold molasses. The Orinoco Belt in Venezuela is sometimes described as oil sands, but these deposits are non-bituminous, falling instead into the category of heavy or extra-heavy oil due to their lower viscosity. Natural bitumen and extra-heavy oil differ in the degree to which they have been degraded from the original conventional oils by bacteria. The 1973 and 1979 oil price increases and the development of improved extraction technology enabled profitable extraction and processing of the oil sands. Together with other so-called unconventional oil extraction practices, oil sands are implicated in the unburnable carbon debate, but they also contribute to energy security and counteract the international price cartel OPEC. According to the Oil Climate Index, carbon emissions from oil-sand crude are 31% higher than from conventional oil. In Canada, oil sands production in general, and in-situ extraction in particular, are the largest contributors to the increase in the nation's greenhouse gas emissions from 2005 to 2017, according to Natural Resources Canada (NRCan).
History
The exploitation of bituminous deposits and seeps dates back to Paleolithic times. The earliest known use of bitumen was by Neanderthals, some 40,000 years ago: bitumen has been found adhering to stone tools used by Neanderthals at sites in Syria. After the arrival of Homo sapiens, humans used bitumen for the construction of buildings and the waterproofing of reed boats, among other uses. In ancient Egypt, the use of bitumen was important in preparing mummies. In ancient times, bitumen was primarily a Mesopotamian commodity used by the Sumerians and Babylonians, although it was also found in the Levant and Persia. The area along the Tigris and Euphrates rivers was littered with hundreds of pure bitumen seepages, and the Mesopotamians used the bitumen for waterproofing boats and buildings. In Europe, bituminous sands were extensively mined near the French city of Pechelbronn, where the vapour separation process was in use in 1742. In Canada, First Nations peoples had used bitumen from seeps along the Athabasca and Clearwater Rivers to waterproof their birch bark canoes from early prehistoric times. The Canadian oil sands first became known to Europeans in 1719, when a Cree named Wa-Pa-Su brought a sample to Hudson's Bay Company fur trader Henry Kelsey, who commented on it in his journals. Fur trader Peter Pond paddled down the Clearwater River to Athabasca in 1778, saw the deposits, and wrote of "springs of bitumen that flow along the ground".
In 1787, fur trader and explorer Alexander MacKenzie, on his way to the Arctic Ocean, saw the Athabasca oil sands and commented, "At about 24 miles from the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long may be inserted without the least resistance."
Cost of oil sands petroleum-mining operations
In a May 2019 update of its "cost of supply curve", in which the Norway-based Rystad Energy, an independent energy research and consultancy, ranked the world's total recoverable liquid resources by their breakeven price, Rystad reported that the average breakeven price for oil from the oil sands was US$83 in 2019, making it the most expensive to produce of all the significant oil-producing regions in the world. The International Energy Agency has made similar comparisons. The price per barrel of heavier, sour crude oils lacking tidewater access, such as Western Canadian Select (WCS) from the Athabasca oil sands, is set at a differential to lighter, sweeter oils such as West Texas Intermediate (WTI). The price is based on the oil's grade, determined by factors such as its specific gravity (or API gravity) and its sulfur content, and on its location, for example its proximity to tidewater and/or refineries. Because the cost of production is so much higher at oil sands petroleum-mining operations, the breakeven point is much higher than for the sweeter, lighter oils produced by Saudi Arabia, Iran, Iraq and the United States. Oil sands production expanded and prospered as the global price of oil rose to peak highs because of the Arab oil embargo of 1973, the 1979 Iranian Revolution, the 1990 Persian Gulf crisis and war, the 11 September 2001 attacks, and the 2003 invasion of Iraq. The boom periods were followed by busts, as the global price of oil dropped during the 1980s and again in the 1990s, during periods of global recession, and again in 2003.
Nomenclature
The name tar sands was applied to bituminous sands in the late 19th and early 20th century. People who saw the bituminous sands during this period were familiar with the large amounts of tar residue produced in urban areas as a by-product of the manufacture of coal gas for urban heating and lighting. The word "tar" to describe these natural bitumen deposits is really a misnomer, since, chemically speaking, tar is a human-made substance produced by the destructive distillation of organic material, usually coal. Since then, coal gas has almost completely been replaced by natural gas as a fuel, and coal tar as a material for paving roads has been replaced by the petroleum product asphalt. Naturally occurring bitumen is chemically more similar to asphalt than to coal tar, and the term oil sands (or oilsands) is more commonly used by industry in the producing areas than tar sands, because synthetic oil is manufactured from the bitumen, and because the term tar sands is felt to be less politically acceptable to the public. Oil sands are now an alternative to conventional crude oil.
Geology
The world's largest deposits of oil sands are in Venezuela and Canada. The geology of the deposits in the two countries is generally rather similar. They are vast heavy oil, extra-heavy oil, and/or bitumen deposits with oil heavier than 20°API, found largely in unconsolidated sandstones with similar properties. "Unconsolidated" in this context means that the sands have high porosity, no significant cohesion, and a tensile strength close to zero. The sands are saturated with oil, which has prevented them from consolidating into hard sandstone.
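Because grades and deposits here are repeatedly quoted in degrees API, the standard conversion from specific gravity may be useful. The formula is the industry-standard definition, supplied for illustration rather than taken from this article:

    def api_gravity(specific_gravity):
        # Degrees API from specific gravity at 60 degF; lower API means denser oil,
        # and exactly 10 degAPI corresponds to the density of water.
        return 141.5 / specific_gravity - 131.5

    print(round(api_gravity(0.934), 1))  # about 20 degAPI, the heavy-oil threshold above
    print(round(api_gravity(1.01), 1))   # about 8.6 degAPI, in the bitumen range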
Size of resources
The magnitude of the resources in the two countries is on the order of 3.5 to 4 trillion barrels (550 to 650 billion cubic metres) of original oil in place (OOIP). Oil in place is not necessarily oil reserves, and the amount that can be produced depends on technological evolution. Rapid technological developments in Canada in the 1985–2000 period resulted in techniques such as steam-assisted gravity drainage (SAGD) that can recover a much greater percentage of the OOIP than conventional methods. The Alberta government estimates that with current technology 10% of its bitumen and heavy oil can be recovered, which would give it about 200 billion barrels (32 billion m3) of recoverable oil reserves. Venezuela estimates its recoverable oil at 267 billion barrels (42 billion m3). This places Canada and Venezuela in the same league as Saudi Arabia, the three countries having the largest oil reserves in the world.
Major deposits
There are numerous deposits of oil sands in the world, but the biggest and most important are in Canada and Venezuela, with lesser deposits in Kazakhstan and Russia. The total volume of non-conventional oil in the oil sands of these countries exceeds the reserves of conventional oil in all other countries combined. Vast deposits of bitumen, over 350 billion cubic metres (2.2 trillion barrels) of oil in place, exist in the Canadian provinces of Alberta and Saskatchewan. If only 30% of this oil could be extracted, it could supply the entire needs of North America for over 100 years at 2002 consumption levels. These deposits represent plentiful oil, but not cheap oil: they require advanced technology to extract the oil and transport it to oil refineries.
Canada
The oil sands of the Western Canadian Sedimentary Basin (WCSB) are a result of the formation of the Canadian Rocky Mountains by the Pacific Plate overthrusting the North American Plate as it pushed in from the west, carrying the formerly large island chains which now compose most of British Columbia. The collision compressed the Alberta plains and raised the Rockies above the plains, forming mountain ranges. This mountain-building process buried the sedimentary rock layers which underlie most of Alberta to a great depth, creating high subsurface temperatures and producing a giant pressure-cooker effect that converted the kerogen in the deeply buried organic-rich shales to light oil and natural gas. These source rocks were similar to the American so-called oil shales, except that the latter have never been buried deep enough to convert their kerogen into liquid oil. This overthrusting also tilted the pre-Cretaceous sedimentary rock formations underlying most of the subsurface of Alberta, depressing the rock formations in southwest Alberta to depths of up to 8 km (5 mi) near the Rockies, but to zero depth in the northeast, where they pinch out against the igneous rocks of the Canadian Shield, which outcrop at the surface. This tilting is not apparent on the surface because the resulting trench has been filled in by material eroded from the mountains. The light oil migrated up-dip through hydrodynamic transport from the Rockies in the southwest toward the Canadian Shield in the northeast, following a complex pre-Cretaceous unconformity that exists in the formations under Alberta. The total distance of oil migration from southwest to northeast was about 500 to 700 km (300 to 400 mi).
At the shallow depths of the sedimentary formations in the northeast, massive microbial biodegradation as the oil approached the surface caused the oil to become highly viscous and immobile. Almost all of the remaining oil is found in the far north of Alberta, in Middle Cretaceous (115-million-year-old) sand-silt-shale deposits overlain by thick shales, although large amounts of heavy oil lighter than bitumen are found in the Heavy Oil Belt along the Alberta–Saskatchewan border, extending into Saskatchewan and approaching the Montana border. Note that, although adjacent to Alberta, Saskatchewan has no massive deposits of bitumen, only large reservoirs of heavy oil above 10°API. Most of the Canadian oil sands are in three major deposits in northern Alberta: the Athabasca-Wabiskaw oil sands of north northeastern Alberta, the Cold Lake deposits of east northeastern Alberta, and the Peace River deposits of northwestern Alberta. Between them they cover over 140,000 square kilometres (54,000 sq mi), an area larger than England, and contain approximately 1.75 Tbbl (280×10^9 m3) of crude bitumen. About 10% of the oil in place, or 173 Gbbl (27.5×10^9 m3), is estimated by the government of Alberta to be recoverable at current prices using current technology, which amounts to 97% of Canadian oil reserves and 75% of total North American petroleum reserves. Although the Athabasca deposit is the only one in the world which has areas shallow enough to mine from the surface, all three Alberta areas are suitable for production using in-situ methods, such as cyclic steam stimulation (CSS) and steam-assisted gravity drainage (SAGD). The largest Canadian oil sands deposit, the Athabasca oil sands, is in the McMurray Formation, centered on the city of Fort McMurray, Alberta. It outcrops at the surface (zero burial depth) about 50 km (30 mi) north of Fort McMurray, where enormous oil sands mines have been established, but is 400 m (1,300 ft) deep southeast of Fort McMurray. Only 3% of the oil sands area, containing about 20% of the recoverable oil, can be produced by surface mining, so the remaining 80% will have to be produced from in-situ wells. The other Canadian deposits are between 350 and 900 m (1,000 and 3,000 ft) deep and will require in-situ production.
Athabasca
Cold Lake
The Cold Lake oil sands are northeast of Alberta's capital, Edmonton, near the border with Saskatchewan; a small portion of the Cold Lake deposit lies in Saskatchewan. Although smaller than the Athabasca oil sands, the Cold Lake oil sands are important because some of the oil is fluid enough to be extracted by conventional methods. The Cold Lake bitumen contains more alkanes and fewer asphaltenes than the other major Alberta oil sands, and the oil is more fluid; as a result, cyclic steam stimulation (CSS) is commonly used for production. The Cold Lake oil sands are roughly circular in shape, centered around Bonnyville, Alberta. They probably contain over 60 billion cubic metres (370 billion barrels) of extra-heavy oil in place. The oil is highly viscous, though considerably less so than in the Athabasca oil sands, and is somewhat less sulfurous. The deposits are 400 to 600 metres (1,300 to 2,000 ft) deep and from 15 to 35 metres (49 to 115 ft) thick; they are too deep to surface mine. Much of the oil sands area is on Canadian Forces Base Cold Lake. CFB Cold Lake's CF-18 Hornet jet fighters defend the western half of Canadian air space and cover Canada's Arctic territory.
Cold Lake Air Weapons Range (CLAWR) is one of the largest live-drop bombing ranges in the world, including testing of cruise missiles. As oil sands production continues to grow, various sectors vie for access to airspace, land, and resources, and this complicates oil well drilling and production significantly.
Peace River
Venezuela
The Eastern Venezuelan Basin has a structure similar to the WCSB, but on a smaller scale. The distance the oil has migrated up-dip from the Sierra Oriental mountain front to the Orinoco oil sands, where it pinches out against the igneous rocks of the Guyana Shield, is only about 200 to 300 km (100 to 200 mi). The hydrodynamic conditions of oil transport were similar: source rocks, buried deep by the rise of the Sierra Oriental mountains, produced light oil that moved up-dip toward the south until it was gradually immobilized by the viscosity increase caused by biodegradation near the surface. The Orinoco deposits are early Tertiary (50- to 60-million-year-old) sand-silt-shale sequences overlain by continuous thick shales, much like the Canadian deposits. In Venezuela, the Orinoco Belt oil sands range from 350 to 1,000 m (1,000 to 3,000 ft) deep and no surface outcrops exist. The deposit is about 500 km (300 mi) long east-to-west and 50 to 60 km (30 to 40 mi) wide north-to-south, much less than the combined area covered by the Canadian deposits. In general, the Canadian deposits are found over a much wider area, have a broader range of properties, and have a broader range of reservoir types than the Venezuelan ones, but the geological structures and mechanisms involved are similar. The main difference is that the oil in the sands in Venezuela is less viscous than in Canada, allowing some of it to be produced by conventional drilling techniques, but none of it approaches the surface as it does in Canada, meaning none of it can be produced by surface mining. The Canadian deposits will almost all have to be produced by mining or by new non-conventional techniques.
Orinoco
The Orinoco Belt is a territory in the southern strip of the eastern Orinoco River Basin in Venezuela which overlies one of the world's largest deposits of petroleum. The Orinoco Belt follows the line of the river. It is approximately 600 kilometres (370 mi) from east to west and 70 kilometres (43 mi) from north to south, with an area of about 55,314 square kilometres (21,357 sq mi). The oil sands consist of large deposits of extra-heavy crude. Venezuela's heavy oil deposits of about 1,200 Gbbl (190×10^9 m3) of oil in place are estimated to approximately equal the world's reserves of lighter oil. In 2009, the US Geological Survey (USGS) increased its estimate of the reserves to 513 Gbbl (81.6×10^9 m3) of oil which is "technically recoverable (producible using currently available technology and industry practices)". No estimate was made of how much of the oil is economically recoverable.
Other deposits
In addition to the three major Canadian oil sands deposits in Alberta, there is a fourth major oil sands deposit in Canada, the Melville Island oil sands in the Canadian Arctic islands, which is too remote to expect commercial production in the foreseeable future. Apart from the megagiant oil sands deposits in Canada and Venezuela, numerous other countries hold smaller oil sands deposits.
In the United States, there are supergiant oil sands resources primarily concentrated in Eastern Utah, with a total of 32 Gbbl (5.1×10^9 m3) of oil (known and potential) in eight major deposits in Carbon, Garfield, Grand, Uintah, and Wayne counties. In addition to being much smaller than the Canadian oil sands deposits, the US oil sands are hydrocarbon-wet, whereas the Canadian oil sands are water-wet; this requires somewhat different extraction techniques for the Utah oil sands from those used for the Alberta oil sands. Russia holds oil sands in two main regions. Large resources are present in the Tunguska Basin in East Siberia, the largest deposits being Olenyok and Siligir. Other deposits are located in the Timan-Pechora and Volga-Urals basins; the latter, in and around Tatarstan, is an important but very mature province in terms of conventional oil and holds large amounts of oil sands in a shallow Permian formation. In Kazakhstan, large bitumen deposits are located in the North Caspian Basin. In Madagascar, Tsimiroro and Bemolanga are two heavy oil sands deposits, with a pilot well already producing small amounts of oil at Tsimiroro and larger-scale exploitation in the early planning phase. In the Republic of the Congo, reserves are estimated at between 0.5 and 2.5 Gbbl (79×10^6 and 397×10^6 m3).
Production
Bituminous sands are a major source of unconventional oil, although only Canada has a large-scale commercial oil sands industry. In 2006, bitumen production in Canada averaged 1.25 Mbbl/d (200,000 m3/d) through 81 oil sands projects, and in 2007, 44% of Canadian oil production was from oil sands. This proportion was (as of 2008) expected to increase in coming decades as bitumen production grows while conventional oil production declines, although work on new projects was deferred due to the 2008 economic downturn. Petroleum is not produced from oil sands on a significant level in other countries.
Canada
The Alberta oil sands have been in commercial production since the original Great Canadian Oil Sands (now Suncor Energy) mine began operation in 1967. Syncrude's second mine began operation in 1978 and is the biggest mine of any type in the world. The third mine in the Athabasca oil sands, the Albian Sands consortium of Shell Canada, Chevron Corporation, and Western Oil Sands Inc. (purchased by Marathon Oil Corporation in 2007), began operation in 2003. Petro-Canada was also developing a $33 billion Fort Hills Project, in partnership with UTS Energy Corporation and Teck Cominco, which lost momentum after the 2009 merger of Petro-Canada into Suncor. By 2013 there were nine oil sands mining projects in the Athabasca oil sands deposit: Suncor Energy Inc. (Suncor), Syncrude Canada Limited (Syncrude)'s Mildred Lake and Aurora North, Shell Canada Limited (Shell)'s Muskeg River and Jackpine, Canadian Natural Resources Limited (CNRL)'s Horizon, Imperial Oil Resources Ventures Limited (Imperial)'s Kearl Oil Sands Project (KOSP), Total E&P Canada Ltd's Joslyn North Mine, and Fort Hills Energy Corporation (FHEC). In 2011 alone they produced over 52 million cubic metres of bitumen. Canadian oil sands extraction has created extensive environmental damage, and many First Nations peoples, scientists, lawyers, journalists and environmental groups have described Canadian oil sands mining as an ecocide.
Venezuela
No significant development of Venezuela's extra-heavy oil deposits was undertaken before 2000, except for the BITOR operation, which produced somewhat less than 100,000 barrels of oil per day (16,000 m3/d) of 9°API oil by primary production. This was mostly shipped as an emulsion (Orimulsion) of 70% oil and 30% water, with characteristics similar to heavy fuel oil, for burning in thermal power plants. However, when a major strike hit the Venezuelan state oil company PDVSA, most of the engineers were fired as punishment. Orimulsion had been the pride of the PDVSA engineers, so it fell out of favor with key political leaders, and the government has been trying to wind down the Orimulsion program. Despite the fact that the Orinoco oil sands contain extra-heavy oil, which is easier to produce than Canada's similarly sized reserves of bitumen, Venezuela's oil production has been declining in recent years because of the country's political and economic problems, while Canada's has been increasing. As a result, Canadian heavy oil and bitumen exports have been backing Venezuelan heavy and extra-heavy oil out of the US market, and Canada's total exports of oil to the US have become several times as great as Venezuela's. By 2016, with the economy of Venezuela in a tailspin and the country experiencing widespread shortages of food, rolling power blackouts, rioting, and anti-government protests, it was unclear how much new oil sands production would occur in the near future.
Other countries
In May 2008, the Italian oil company Eni announced a project to develop a small oil sands deposit in the Republic of the Congo. Production was scheduled to commence in 2014 and was estimated to eventually yield a total of 40,000 bbl/d (6,400 m3/d).
Methods of extraction
Except for a fraction of the extra-heavy oil or bitumen which can be extracted by conventional oil well technology, oil sands must be produced by strip mining, or the oil must be made to flow into wells using sophisticated in-situ techniques. These methods usually use more water and require larger amounts of energy than conventional oil extraction. While much of Canada's oil sands are being produced using open-pit mining, approximately 90% of Canadian oil sands and all of Venezuela's oil sands are too far below the surface to use surface mining.
Primary production
Conventional crude oil is normally extracted from the ground by drilling oil wells into a petroleum reservoir, allowing oil to flow into them under natural reservoir pressure, although artificial lift and techniques such as horizontal drilling, water flooding and gas injection are often required to maintain production. When primary production is used in the Venezuelan oil sands, where the extra-heavy oil is at about 50 degrees Celsius, typical oil recovery rates are about 8–12%. Canadian oil sands are much colder and more biodegraded, so bitumen recovery rates are usually only about 5–6%. Historically, primary recovery was used in the more fluid areas of Canadian oil sands, but it recovered only a small fraction of the oil in place, so it is not often used today.
Surface mining
The Athabasca oil sands are the only major oil sands deposit shallow enough to surface mine. In the Athabasca sands there are very large amounts of bitumen covered by little overburden, making surface mining the most efficient method of extracting it. The overburden consists of water-laden muskeg (peat bog) on top of clay and barren sand.
The oil sands themselves are typically 40-to-60-metre (130 to 200 ft) thick deposits of crude bitumen embedded in unconsolidated sandstone, sitting on top of flat limestone rock. Since Great Canadian Oil Sands (now Suncor Energy) started operation of the first large-scale oil sands mine in 1967, bitumen has been extracted on a commercial scale and the volume has grown at a steady rate ever since. A large number of oil sands mines are currently in operation and more are in the stages of approval or development. The Syncrude Canada mine was the second to open, in 1978; Shell Canada opened its Muskeg River mine (Albian Sands) in 2003; and Canadian Natural Resources Ltd (CNRL) opened its Horizon Oil Sands project in 2009. Newer mines include Shell Canada's Jackpine mine, Imperial Oil's Kearl Oil Sands Project, the Synenco Energy (now owned by TotalEnergies) Northern Lights mine, and Suncor's Fort Hills mine.
Oil sands tailings ponds
Oil sands tailings ponds are engineered dam and dyke systems that contain salts, suspended solids and other dissolvable chemical compounds such as naphthenic acids, benzene, hydrocarbons, residual bitumen, fine silts (mature fine tails, MFT), and water. Large volumes of tailings are a byproduct of surface mining of the oil sands, and managing these tailings is one of the most environmentally damaging aspects of oil sands mining. The Government of Alberta reported in 2013 that tailings ponds in the Alberta oil sands covered an area of about 77 square kilometres (30 sq mi). The Syncrude Tailings Dam, or Mildred Lake Settling Basin (MLSB), is an embankment dam that was, by volume of construction material, the largest earth structure in the world in 2001.
Cold Heavy Oil Production with Sand (CHOPS)
Some years ago Canadian oil companies discovered that if they removed the sand filters from heavy oil wells and produced as much sand as possible with the oil, production rates improved significantly. This technique became known as Cold Heavy Oil Production with Sand (CHOPS). Further research disclosed that pumping out sand opened "wormholes" in the sand formation which allowed more oil to reach the wellbore. The advantages of this method are better production rates and recovery (around 10%, versus 5–6% with sand filters in place); the disadvantage is that disposing of the produced sand is a problem. A novel way to do this was spreading it on rural roads, which rural governments liked because the oily sand reduced dust and the oil companies did their road maintenance for them. However, governments have become concerned about the large volume and composition of oil spread on roads, so in recent years disposing of oily sand in underground salt caverns has become more common.
Cyclic Steam Stimulation (CSS)
The use of steam injection to recover heavy oil has been in use in the oil fields of California since the 1950s. The cyclic steam stimulation (CSS) "huff-and-puff" method is now widely used in heavy oil production worldwide because of its quick early production rates; however, recovery factors are relatively low (10–40% of oil in place) compared to SAGD (60–70% of OIP). CSS has been in use by Imperial Oil at Cold Lake since 1985 and is also used by Canadian Natural Resources at Primrose and Wolf Lake and by Shell Canada at Peace River. In this method, the well is put through cycles of steam injection, soak, and oil production. First, steam is injected into a well at a temperature of 300 to 340 degrees Celsius for a period of weeks to months; then the well is allowed to sit for days to weeks to let the heat soak into the formation; and finally the hot oil is pumped out of the well for a period of weeks or months. Once the production rate falls off, the well is put through another cycle of injection, soak and production. This process is repeated until the cost of injecting steam becomes higher than the money made from producing oil.
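The inject-soak-produce cycle, with its stopping rule that steaming ends once a cycle costs more than its oil earns, can be caricatured as a short control loop. Every number below is invented for illustration; no real reservoir behaves this simply:

    def css_total_oil(first_cycle_bbl, decline_per_cycle, oil_price, steam_cost_per_cycle):
        # Keep cycling while a cycle's oil revenue still exceeds its steam cost.
        total = 0.0
        produced = first_cycle_bbl
        while produced * oil_price > steam_cost_per_cycle:
            total += produced
            produced *= 1 - decline_per_cycle  # each successive cycle recovers less oil
        return total

    # Assumed: 30,000 bbl in the first cycle, 15% decline per cycle,
    # $60/bbl oil and $900,000 of steam per cycle.
    print(round(css_total_oil(30_000, 0.15, 60.0, 900_000)))  # about 111,000 bbl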
Steam-assisted gravity drainage (SAGD)
Steam-assisted gravity drainage was developed in the 1980s by the Alberta Oil Sands Technology and Research Authority and fortuitously coincided with improvements in directional drilling technology that made it quick and inexpensive by the mid-1990s. In SAGD, two horizontal wells are drilled in the oil sands, one at the bottom of the formation and another about 5 metres above it. These wells are typically drilled in groups off central pads and can extend for miles in all directions. In each well pair, steam is injected into the upper well; the heat melts the bitumen, which allows it to flow into the lower well, from where it is pumped to the surface. SAGD has proved to be a major breakthrough in production technology, since it is cheaper than CSS, allows very high oil production rates, and recovers up to 60% of the oil in place. Because of its economic feasibility and applicability to a vast area of oil sands, this method alone quadrupled North American oil reserves and allowed Canada to move to second place in world oil reserves, after Saudi Arabia. Most major Canadian oil companies now have SAGD projects in production or under construction in Alberta's oil sands areas and in Wyoming. Examples include Japan Canada Oil Sands Ltd's (JACOS) project, Suncor's Firebag project, Nexen's Long Lake project, Suncor's (formerly Petro-Canada's) MacKay River project, Husky Energy's Tucker Lake and Sunrise projects, Shell Canada's Peace River project, Cenovus Energy's Foster Creek and Christina Lake developments, ConocoPhillips' Surmont project, Devon Canada's Jackfish project, and Derek Oil & Gas's LAK Ranch project. Alberta's OSUM Corp has combined proven underground mining technology with SAGD to enable higher recovery rates by running wells underground from within the oil sands deposit, thus also reducing energy requirements compared to traditional SAGD. This particular technology application is in its testing phase.
Vapor Extraction (VAPEX)
Several methods use solvents, instead of steam, to separate bitumen from sand. Some solvent extraction methods may work better for in-situ production and others for mining. Solvents can be beneficial if they recover more oil while requiring less energy than producing steam. The Vapor Extraction Process (VAPEX) is an in-situ technology similar to SAGD: instead of steam, hydrocarbon solvents are injected into an upper well to dilute the bitumen, enabling the diluted bitumen to flow into a lower well. It has the advantage of much better energy efficiency than steam injection, and it does some partial upgrading of bitumen to oil right in the formation. The process has attracted attention from oil companies, who are experimenting with it. The above methods are not mutually exclusive. It is becoming common for wells to be put through one CSS injection-soak-production cycle to condition the formation prior to going to SAGD production, and companies are experimenting with combining VAPEX with SAGD to improve recovery rates and lower energy costs.
Toe to Heel Air Injection (THAI)
This is a very new and experimental method that combines a vertical air injection well with a horizontal production well. The process ignites oil in the reservoir and creates a vertical wall of fire moving from the "toe" of the horizontal well toward the "heel", which burns the heavier oil components and upgrades some of the heavy bitumen into lighter oil right in the formation. Historically, fireflood projects have not worked out well, because of difficulty in controlling the flame front and a propensity to set the producing wells on fire. However, some oil companies feel the THAI method will be more controllable and practical, and will have the advantage of not requiring energy to create steam. Advocates of this method of extraction state that it uses less freshwater, produces 50% less greenhouse gases, and has a smaller footprint than other production techniques. Petrobank Energy and Resources has reported encouraging results from their test wells in Alberta, with production rates of up to 400 bbl/d (64 m3/d) per well, and the oil upgraded from 8 to 12 API degrees. The company hopes to get a further 7-degree upgrade from its CAPRI system, a catalytic in-situ upgrading add-on which pulls the oil through a catalyst lining the lower pipe. After several years of in-situ production, it has become clear that current THAI methods do not work as planned. Amid steady drops in production from their THAI wells at Kerrobert, Petrobank has written down the value of their THAI patents and the reserves at the facility to zero. They have plans to experiment with a new configuration they call "multi-THAI", involving adding more air injection wells.
Combustion Overhead Gravity Drainage (COGD)
This is an experimental method that employs a number of vertical air injection wells above a horizontal production well located at the base of the bitumen pay zone. An initial steam cycle similar to CSS is used to prepare the bitumen for ignition and mobility. Following that cycle, air is injected into the vertical wells, igniting the upper bitumen and mobilizing (through heating) the lower bitumen to flow into the production well. It is expected that COGD will result in water savings of 80% compared to SAGD.
Froth treatment
Energy balance
Approximately 1.0–1.25 gigajoules (280–350 kWh) of energy is needed to extract a barrel of bitumen and upgrade it to synthetic crude. As of 2006, most of this energy came from burning natural gas. Since a barrel of oil equivalent is about 6.117 gigajoules (1,699 kWh), the EROEI is 5–6; that is, the process yields about five to six times as much energy as it consumes. Energy efficiency was expected to improve to an average of 900 cubic feet (25 m3) of natural gas, or 0.945 gigajoules (262 kWh) of energy, per barrel by 2015, giving an EROEI of about 6.5.
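The EROEI figures quoted above follow directly from the energy contents involved, as a two-line check shows:

    BOE_GJ = 6.117  # gigajoules in one barrel of oil equivalent

    def eroei(input_gj_per_barrel):
        return BOE_GJ / input_gj_per_barrel

    print(round(eroei(1.25), 1), round(eroei(1.0), 1))  # 4.9 and 6.1, the "5-6" range
    print(round(eroei(0.945), 1))                       # 6.5 at the 2015 efficiency target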
Alternatives to natural gas exist and are available in the oil sands area. Bitumen can itself be used as the fuel, consuming about 30–35% of the raw bitumen per produced unit of synthetic crude. Nexen's Long Lake project will use a proprietary deasphalting technology to upgrade the bitumen, with the asphaltene residue fed to a gasifier whose syngas will be used by a cogeneration turbine and a hydrogen-producing unit, providing all the energy needs of the project: steam, hydrogen, and electricity. Thus, it will produce syncrude without consuming natural gas, but the capital cost is very high. Shortages of natural gas for project fuel were forecast to be a problem for Canadian oil sands production a few years ago, but recent increases in US shale gas production have eliminated much of the problem for North America. With the increasing use of hydraulic fracturing making the US largely self-sufficient in natural gas, and more US gas being exported to Eastern Canada to replace Alberta gas, the Alberta government is using its powers under NAFTA and the Canadian Constitution to reduce shipments of natural gas to the US and Eastern Canada and divert the gas to domestic Alberta use, particularly as oil sands fuel. The natural gas pipelines to the east and south are being converted to carry increasing oil sands production to these destinations instead of gas. Canada also has huge undeveloped shale gas deposits in addition to those of the US, so natural gas supply for future oil sands production does not seem to be a serious problem. The low price of natural gas resulting from new production has considerably improved the economics of oil sands production.
Upgrading and blending
The extra-heavy crude oil or crude bitumen extracted from oil sands is a very viscous semisolid form of oil that does not easily flow at normal temperatures, making it difficult to transport to market by pipeline. To flow through oil pipelines, it must either be upgraded to lighter synthetic crude oil (SCO), blended with diluents to form dilbit, or heated to reduce its viscosity.
Canada
In the Canadian oil sands, bitumen produced by surface mining is generally upgraded on-site and delivered as synthetic crude oil. This makes delivery to market through conventional oil pipelines quite easy. On the other hand, bitumen produced by in-situ projects is generally not upgraded but is delivered to market in raw form. If the agent used to upgrade the bitumen to synthetic crude is not produced on site, it must be sourced elsewhere and transported to the site of upgrading. If the upgraded crude is transported from the site by pipeline, an additional pipeline will be required to bring in sufficient upgrading agent. The costs of producing the upgrading agent, of the pipeline to transport it, and of operating that pipeline must all be counted in the production cost of the synthetic crude. Upon reaching a refinery, the synthetic crude is processed and a significant portion of the upgrading agent is removed during the refining process. It may be used for other fuel fractions, but the end result is that liquid fuel has to be piped to the upgrading facility simply to make the bitumen transportable by pipeline. If all costs are considered, synthetic crude production and transfer using bitumen and an upgrading agent may prove economically unsustainable. When the first oil sands plants were built over 50 years ago, most oil refineries in their market area were designed to handle light or medium crude oil with lower sulfur content than the 4–7% that is typically found in bitumen.
The original oil sands upgraders were designed to produce a high-quality synthetic crude oil (SCO) with lower density and lower sulfur content. These are large, expensive plants, much like heavy oil refineries. Research is currently being done on designing simpler upgraders which do not produce SCO but simply treat the bitumen to reduce its viscosity, allowing it to be transported unblended, like conventional heavy oil. Western Canadian Select, launched in 2004 as a new heavy oil stream blended at the Husky Energy terminal in Hardisty, Alberta, is the largest crude oil stream coming from the Canadian oil sands and the benchmark for emerging heavy, high-TAN (acidic) crudes.: 9  Western Canadian Select (WCS) is traded at Cushing, Oklahoma, a major oil supply hub connecting oil suppliers to the Gulf Coast, which has become the most significant trading hub for crude oil in North America. While its major component is bitumen, it also contains a combination of sweet synthetic and condensate diluents and 25 existing streams of both conventional and unconventional oil, making it a "syndilbit", both a dilbit and a synbit.: 16 
The first step in upgrading is vacuum distillation to separate the lighter fractions. After that, de-asphalting is used to separate the asphalt from the feedstock. Cracking is used to break the heavier hydrocarbon molecules down into simpler ones. Since cracking produces products which are rich in sulfur, desulfurization must be done to get the sulfur content below 0.5% and create sweet, light synthetic crude oil. In 2012, Alberta produced about 1,900,000 bbl/d (300,000 m3/d) of crude bitumen from its three major oil sands deposits, of which about 1,044,000 bbl/d (166,000 m3/d) was upgraded to lighter products and the rest sold as raw bitumen. The volume of both upgraded and non-upgraded bitumen is increasing yearly. Alberta has five oil sands upgraders producing a variety of products:
Suncor Energy can upgrade 440,000 bbl/d (70,000 m3/d) of bitumen to light sweet and medium sour synthetic crude oil (SCO), and also produces diesel fuel for its oil sands operations at the upgrader.
Syncrude can upgrade 407,000 bbl/d (64,700 m3/d) of bitumen to sweet light SCO.
Canadian Natural Resources Limited (CNRL) can upgrade 141,000 bbl/d (22,400 m3/d) of bitumen to sweet light SCO.
Nexen, since 2013 wholly owned by China National Offshore Oil Corporation (CNOOC), can upgrade 72,000 bbl/d (11,400 m3/d) of bitumen to sweet light SCO.
Shell Canada operates its Scotford Upgrader in combination with an oil refinery and chemical plant at Scotford, Alberta, near Edmonton. The complex can upgrade 255,000 bbl/d (40,500 m3/d) of bitumen to sweet and heavy SCO as well as a range of refinery and chemical products.
Modernized and new large refineries, such as those found in the Midwestern United States and on the Gulf Coast of the United States, as well as many in China, can handle upgrading heavy oil themselves, so their demand is for non-upgraded bitumen and extra-heavy oil rather than SCO. The main problem is that the feedstock would be too viscous to flow through pipelines, so unless it is delivered by tanker or rail car, it must be blended with diluent to enable it to flow. This requires mixing the crude bitumen with a lighter hydrocarbon diluent, such as condensate from gas wells, pentanes and other light products from oil refineries or gas plants, or synthetic crude oil from oil sands upgraders, to allow it to flow through pipelines to market.
Typically, blended bitumen contains about 30% natural gas condensate or other diluents and 70% bitumen. Alternatively, bitumen can be delivered to market by specially designed railway tank cars, tank trucks, liquid cargo barges, or ocean-going oil tankers. These do not necessarily require the bitumen to be blended with diluent, since the tanks can be heated to allow the oil to be pumped out. The demand for condensate for oil sands diluent was expected to be more than 750,000 bbl/d (119,000 m3/d) by 2020, double 2012 volumes. Since Western Canada only produces about 150,000 bbl/d (24,000 m3/d) of condensate, the supply was expected to become a major constraint on bitumen transport. However, the recent huge increase in US tight oil production has largely solved this problem, because much of that production is too light for US refinery use but ideal for diluting bitumen. The surplus American condensate and light oil is exported to Canada and blended with bitumen, then re-imported to the US as feedstock for refineries. Since the diluent is simply exported and then immediately re-imported, it is not subject to the US ban on exports of crude oil. Once it is back in the US, refineries separate the diluent and re-export it to Canada, which again bypasses US crude oil export laws since it is now a refinery product. To aid in this process, Kinder Morgan Energy Partners is reversing its Cochin Pipeline, which used to carry propane from Edmonton to Chicago, to transport 95,000 bbl/d (15,100 m3/d) of condensate from Chicago to Edmonton by mid-2014; and Enbridge is considering expanding its Southern Lights pipeline, which currently ships 180,000 bbl/d (29,000 m3/d) of diluent from the Chicago area to Edmonton, by another 100,000 bbl/d (16,000 m3/d).
Venezuela
Although Venezuelan extra-heavy oil is less viscous than Canadian bitumen, much of the difference is due to temperature. Once the oil comes out of the ground and cools, it has the same difficulty: it is too viscous to flow through pipelines. Venezuela is now producing more extra-heavy crude in the Orinoco oil sands than its four upgraders, which were built by foreign oil companies over a decade ago, can handle. The upgraders have a combined capacity of 630,000 bbl/d (100,000 m3/d), which is only half of its production of extra-heavy oil. In addition, Venezuela produces insufficient volumes of naphtha to use as diluent to move extra-heavy oil to market. Unlike Canada, Venezuela does not produce much natural gas condensate from its own gas wells, nor does it have easy access to condensate from new US shale gas production. Since Venezuela also has insufficient refinery capacity to supply its domestic market, supplies of naphtha are insufficient for use as pipeline diluent, and it has to import naphtha to fill the gap. Since Venezuela also has financial problems, as a result of the country's economic crisis, and political disagreements with the US government and oil companies, the situation remains unresolved.
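The diluent arithmetic behind the roughly 30:70 condensate-to-bitumen blend described for Canada above is simple to reproduce. This sketch is my own illustration; the 1.75 million bbl/d input is chosen to show consistency with the 2020 demand forecast quoted earlier, not a figure from this article:

    def diluent_required(bitumen_bbl, diluent_share=0.30):
        # Diluent volume needed so it makes up `diluent_share` of the finished blend.
        return bitumen_bbl * diluent_share / (1 - diluent_share)

    # 1.75 million bbl/d of pipeline-bound raw bitumen needs about 750,000 bbl/d
    # of condensate, matching the forecast diluent demand for 2020.
    print(round(diluent_required(1_750_000)))  # 750000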
This pre-processing is called "upgrading", the key components of which are as follows:
removal of water, sand, physical waste, and lighter products
catalytic purification by hydrodemetallisation (HDM), hydrodesulfurization (HDS) and hydrodenitrogenation (HDN)
hydrogenation through carbon rejection or catalytic hydrocracking (HCR)
As carbon rejection is inefficient and wasteful, catalytic hydrocracking is preferred in most cases. All these processes take large amounts of energy and water, while emitting more carbon dioxide than conventional oil. Catalytic purification and hydrocracking are together known as hydroprocessing. The big challenge in hydroprocessing is dealing with the impurities found in heavy crude, as they poison the catalysts over time. Many efforts have been made to deal with this problem, to ensure high activity and long life of a catalyst. Catalyst materials and pore size distributions are key parameters that need to be optimized to deal with this challenge; the optimum varies from place to place, depending on the kind of feedstock present.
Canada
There are four major oil refineries in Alberta which supply most of Western Canada with petroleum products, but as of 2012 these processed less than a quarter of the approximately 1,900,000 bbl/d (300,000 m3/d) of bitumen and SCO produced in Alberta. Some of the large oil sands upgraders also produced diesel fuel as part of their operations. Some of the oil sands bitumen and SCO went to refineries in other provinces, but most of it was exported to the United States. The four major Alberta refineries are:
Suncor Energy operates the Petro-Canada refinery near Edmonton, which can process 142,000 bbl/d (22,600 m3/d) of all types of oil and bitumen into all types of products.
Imperial Oil operates the Strathcona Refinery near Edmonton, which can process 187,200 bbl/d (29,760 m3/d) of SCO and conventional oil into all types of products.
Shell Canada operates the Scotford Refinery near Edmonton, which is integrated with the Scotford Upgrader, and which can process 100,000 bbl/d (16,000 m3/d) of all types of oil and bitumen into all types of products.
Husky Energy operates the Husky Lloydminster Refinery in Lloydminster, which can process 28,300 bbl/d (4,500 m3/d) of feedstock from the adjacent Husky Upgrader into asphalt and other products.
The $8.5 billion Sturgeon Refinery, a fifth major Alberta refinery, is under construction near Fort Saskatchewan with a completion date of 2017. The Pacific Future Energy project proposed a new refinery in British Columbia that would process bitumen into fuel for Asian and Canadian markets. Pacific Future Energy proposes to transport near-solid bitumen to the refinery using railway tank cars.
Most of the Canadian oil refining industry is foreign-owned. Canadian refineries can process only about 25% of the oil produced in Canada. Canadian refineries outside of Alberta and Saskatchewan were originally built for light and medium crude oil. With new oil sands production coming on stream at lower prices than international oil, market price imbalances have ruined the economics of refineries which could not process it.
United States
Prior to 2013, when China surpassed it, the United States was the largest oil importer in the world. Unlike Canada, the US has hundreds of oil refineries, many of which have been modified to process heavy oil as US production of light and medium oil declined. The main market for Canadian bitumen as well as Venezuelan extra-heavy oil was assumed to be the US.
The United States has long been Canada's largest customer for crude oil and products. American imports of oil and products from Canada grew from 450,000 bbl/d (72,000 m3/d) in 1981 to 3,120,000 bbl/d (496,000 m3/d) in 2013 as Canada's oil sands produced more and more oil, while in the US, domestic production and imports from other countries declined. However, this relationship is becoming strained due to physical, economic and political influences. Export pipeline capacity is approaching its limits; Canadian oil is selling at a discount to world market prices; US demand for crude oil and product imports has declined because of US economic problems; and US domestic unconventional oil production (shale oil from fracking) is growing rapidly. The US resumed exports of crude oil in 2016; as of early 2019, the US produced as much oil as it consumed, with shale oil displacing Canadian imports.
For the benefit of oil marketers, in 2004 Western Canadian producers created a new benchmark crude oil called Western Canadian Select (WCS), a bitumen-derived heavy crude oil blend that is similar in its transportation and refining characteristics to Californian, Mexican Maya, or Venezuelan heavy crude oils. This heavy oil has an API gravity of 19–21 and, despite containing large amounts of bitumen and synthetic crude oil, flows through pipelines well and is classified as "conventional heavy oil" by governments. Several hundred thousand barrels per day of this blend are imported into the US, in addition to larger amounts of crude bitumen and synthetic crude oil (SCO) from the oil sands. The demand from US refineries is increasingly for non-upgraded bitumen rather than SCO. Canada's National Energy Board (NEB) expects SCO volumes to double to around 1,900,000 bbl/d (300,000 m3/d) by 2035, but not to keep pace with the total increase in bitumen production. It projects that the portion of oil sands production that is upgraded to SCO will decline from 49% in 2010 to 37% in 2035. This implies that over 3,200,000 bbl/d (510,000 m3/d) of bitumen will have to be blended with diluent for delivery to market.
Asia
Demand for oil in Asia has been growing much faster than in North America or Europe. In 2013, China replaced the United States as the world's largest importer of crude oil, and its demand continues to grow much faster than its production. The main impediment to Canadian exports to Asia is pipeline capacity: the only pipeline capable of delivering oil sands production to Canada's Pacific Coast is the Trans Mountain Pipeline from Edmonton to Vancouver, which is now operating at its capacity of 300,000 bbl/d (48,000 m3/d) supplying refineries in B.C. and Washington State. However, once complete, the Northern Gateway pipeline and the Trans Mountain expansion currently undergoing government review are expected to deliver an additional 500,000 bbl/d (79,000 m3/d) to 1,100,000 bbl/d (170,000 m3/d) to tankers on the Pacific coast, from where the oil could be shipped anywhere in the world. There is sufficient heavy oil refinery capacity in China and India to refine the additional Canadian volume, possibly with some modifications to the refineries. In recent years, Chinese oil companies such as China Petrochemical Corporation (Sinopec), China National Offshore Oil Corporation (CNOOC), and PetroChina have bought over $30 billion in assets in Canadian oil sands projects, so they would probably like to export some of their newly acquired oil to China.
Economics
The world's largest deposits of bitumen are in Canada, although Venezuela's deposits of extra-heavy crude oil are even bigger. Canada has vast energy resources of all types, and its oil and natural gas resource base would be large enough to meet Canadian needs for generations if demand were sustained. Abundant hydroelectric resources account for the majority of Canada's electricity production, and very little electricity is produced from oil. The National Energy Board (NEB) reported in 2013 that if oil prices remained above $100, Canada would have more than enough energy to meet its growing needs. The excess oil production from the oil sands could be exported. The major importing country would probably continue to be the United States, although before the developments in 2014 there was increasing demand for oil, particularly heavy oil, from Asian countries such as China and India.
Canada has abundant resources of bitumen and crude oil, with an estimated remaining ultimate resource potential of 54 billion cubic metres (340 billion barrels). Of this, oil sands bitumen accounts for 90 per cent. Alberta accounts for all of Canada's bitumen resources. "Resources" become "reserves" only after it is proven that economic recovery can be achieved. At 2013 prices using current technology, Canada had remaining oil reserves of 27 billion m3 (170 billion bbl), with 98% of this attributed to oil sands bitumen. This put its reserves in third place in the world behind Venezuela and Saudi Arabia. At the much lower prices of 2015, the reserves are much smaller.
Costs
The costs of producing and transporting saleable petroleum from oil sands are typically significantly higher than from conventional global sources. Hence the economic viability of oil sands production is more vulnerable to the price of oil. The price of benchmark West Texas Intermediate (WTI) oil at Cushing, Oklahoma above US$100/bbl that prevailed until late 2014 was sufficient to promote active growth in oil sands production. Major Canadian oil companies had announced expansion plans, and foreign companies were investing significant amounts of capital, in many cases forming partnerships with Canadian companies. Investment had been shifting towards in-situ steam-assisted gravity drainage (SAGD) projects and away from mining and upgrading projects, as oil sands operators foresaw better opportunities from selling bitumen and heavy oil directly to refineries than from upgrading it to synthetic crude oil. Cost estimates for Canada include the costs of returning the mined land to the environment in "as good as or better than original condition". Cleanup of the end products of consumption is the responsibility of the consuming jurisdictions, which are mostly in provinces or countries other than the producing one.
The Alberta government estimated that in 2012, the supply cost of new oil sands mining operations was $70 to $85 per barrel, whereas the cost of new SAGD projects was $50 to $80 per barrel. These costs included capital and operating costs, royalties and taxes, plus a reasonable profit to the investors. After the price of WTI rose above $100/bbl in 2011, production from oil sands was expected to be highly profitable, assuming the product could be delivered to markets. The main market was the huge refinery complexes on the US Gulf Coast, which are generally capable of processing Canadian bitumen and Venezuelan extra-heavy oil without upgrading.
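A minimal sketch of the viability arithmetic implied by these figures, using the Alberta government's 2012 supply-cost ranges quoted above ($70–85/bbl for new mines, $50–80/bbl for new SAGD). The WTI prices are illustrative inputs, not forecasts, and the comparison ignores transport and quality differentials between plant gate and Cushing.

```python
# Compare a benchmark price against published full-cycle supply-cost ranges
# (which already include capital, operating costs, royalties, taxes and a
# reasonable profit) to flag which project types clear their cost range.

SUPPLY_COSTS = {            # (low, high) supply cost, US$/bbl, 2012 estimates
    "new mining": (70, 85),
    "new SAGD": (50, 80),
}

def viability(wti: float) -> None:
    for project, (low, high) in SUPPLY_COSTS.items():
        if wti >= high:
            verdict = "economic across the estimated cost range"
        elif wti >= low:
            verdict = "marginal (covers only the low end of costs)"
        else:
            verdict = "uneconomic at this price"
        print(f"WTI ${wti}/bbl, {project}: {verdict}")

viability(100)   # the pre-2014 price environment
viability(50)    # the late-2015 price environment
```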
The Canadian Energy Research Institute (CERI) performed an analysis estimating that in 2012 the average plant gate costs (including a 10% profit margin, but excluding blending and transport) were $30.32/bbl for primary recovery, $47.57/bbl for SAGD, $99.02/bbl for mining and upgrading, and $68.30/bbl for mining without upgrading. Thus, all types of oil sands projects except new mining projects with integrated upgraders were expected to be consistently profitable from 2011 onward, provided that global oil prices remained favourable. Since the larger and more sophisticated refineries preferred to buy raw bitumen and heavy oil rather than synthetic crude oil, new oil sands projects avoided the costs of building new upgraders. Although primary recovery, as practiced in Venezuela, is cheaper than SAGD, it only recovers about 10% of the oil in place, versus 60% or more for SAGD and over 99% for mining. Canadian oil companies were in a more competitive market and had access to more capital than their Venezuelan counterparts, and preferred to spend that extra money on SAGD or mining to recover more oil.
Then in late 2014 the dramatic rise in U.S. production from shale formations, combined with a global economic malaise that reduced demand, caused the price of WTI to drop below $50, where it remained as of late 2015. In 2015, CERI re-estimated the average plant gate costs (again including a 10% profit margin) at $58.65/bbl for SAGD and $70.18/bbl for mining without upgrading. Including the costs of blending and transportation, the WTI-equivalent supply costs for delivery to Cushing become US$80.06/bbl for SAGD projects and $89.71/bbl for a standalone mine. In this economic environment, plans for further development of production from oil sands have been slowed, deferred, or even abandoned during construction. Production of synthetic crude from mining operations may continue at a loss because of the costs of shutdown and restart, as well as commitments to supply contracts. During the 2020 Russia–Saudi Arabia oil price war, the price of Canadian heavy crude dipped below $5 per barrel.
Production forecasts
Oil sands production forecasts released by the Canadian Association of Petroleum Producers (CAPP), the Alberta Energy Regulator (AER), and the Canadian Energy Research Institute (CERI) are comparable to National Energy Board (NEB) projections in terms of total bitumen production. None of these forecasts takes into account probable international constraints to be imposed on the combustion of all hydrocarbons in order to limit global temperature rise, giving rise to a situation denoted by the term "carbon bubble". Ignoring such constraints, and also assuming that the price of oil recovers from its collapse in late 2014, the list of currently proposed projects, many of which are in the early planning stages, suggests that by 2035 Canadian bitumen production could potentially reach as much as 1.3 million m3/d (8.3 million barrels per day) if most were to go ahead. Under the same assumptions, a more likely scenario is that by 2035, Canadian oil sands bitumen production would reach 800,000 m3/d (5.0 million barrels/day), 2.6 times the production for 2012. The majority of the growth would likely occur in the in-situ category, as in-situ projects usually have better economics than mining projects. Also, 80% of Canada's oil sands reserves are well-suited to in-situ extraction, versus 20% for mining methods.
An additional assumption is that there would be sufficient pipeline infrastructure to deliver increased Canadian oil production to export markets. If this were a limiting factor, there could be impacts on Canadian crude oil prices, constraining future production growth. Another assumption is that US markets will continue to absorb increased Canadian exports. Rapid growth of tight oil production in the US, Canada's primary oil export market, has greatly reduced US reliance on imported crude. The potential for Canadian oil exports to alternative markets such as Asia is also uncertain. There are increasing political obstacles to building any new pipelines to deliver oil in Canada and the US. In November 2015, U.S. President Barack Obama rejected the proposal to build the Keystone XL pipeline from Alberta to Steele City, Nebraska. In the absence of new pipeline capacity, companies are increasingly shipping bitumen to US markets by railway, river barge, tanker, and other transportation methods. Other than ocean tankers, these alternatives are all more expensive than pipelines.
A shortage of skilled workers in the Canadian oil sands developed during periods of rapid development of new projects. In the absence of other constraints on further development, the oil and gas industry would need to fill tens of thousands of job openings in the next few years as a result of industry activity levels as well as age-related attrition. In the longer term, under a scenario of higher oil and gas prices, the labor shortages would continue to get worse. A potential labor shortage can increase construction costs and slow the pace of oil sands development.
The skilled worker shortage was much more severe in Venezuela because the government-controlled oil company PDVSA fired most of its heavy oil experts after the Venezuelan general strike of 2002–03, and wound down the production of Orimulsion, which was the primary product from its oil sands. Following that, the government re-nationalized the Venezuelan oil industry and increased taxes on it. As a result, foreign companies left Venezuela, as did most of its elite heavy oil technical experts. In recent years, Venezuela's heavy oil production has been falling, and it has consistently failed to meet its production targets.
As of late 2015, development of new oil sands projects was deterred by the price of WTI below US$50, which is barely enough to support production from existing operations. Demand recovery was suppressed by economic problems that may continue to bedevil both the European Community and China. Low-cost production by OPEC continued at maximum capacity, the efficiency of production from U.S. shales continued to improve, and Russian exports were mandated even below the cost of production, as their main source of hard currency. There is also the possibility of an international agreement introducing measures to constrain the combustion of hydrocarbons, in an effort to limit global temperature rise to the nominal 2 °C that is predicted, by consensus, to limit environmental harm to tolerable levels. Rapid technological progress is being made to reduce the cost of competing renewable sources of energy.
Hence there is no consensus about when, if ever, oil prices paid to producers may substantially recover.
A detailed academic study of the consequences for the producers of the various hydrocarbon fuels concluded in early 2015 that a third of global oil reserves, half of gas reserves and over 80% of current coal reserves should remain underground from 2010 to 2050 in order to meet the target of 2 °C. Hence continued exploration or development of reserves would be surplus to requirements. To meet the 2 °C target, strong measures would be needed to suppress demand, such as a substantial carbon tax, leaving producers with a lower price in a smaller market. The impact on producers in Canada would be far larger than in the U.S. Open-pit mining of natural bitumen in Canada would drop to negligible levels soon after 2020 in all scenarios considered, because it is considerably less economic than other methods of production.
Environmental issues
In their 2011 commissioned report entitled "Prudent Development: Realizing the Potential of North America's Abundant Natural Gas and Oil Resources", the National Petroleum Council, an advisory committee to the U.S. Secretary of Energy, acknowledged health and safety concerns regarding the oil sands which include "volumes of water needed to generate issues of water sourcing; removal of overburden for surface mining can fragment wildlife habitat and increase the risk of soil erosion or surface run-off events to nearby water systems; GHG and other air emissions from production."
Oil sands extraction can affect the land when the bitumen is initially mined, water resources through its requirement for large quantities of water during separation of the oil and sand, and the air due to the release of carbon dioxide and other emissions. Heavy metals such as vanadium, nickel, lead, cobalt, mercury, chromium, cadmium, arsenic, selenium, copper, manganese, iron and zinc are naturally present in oil sands and may be concentrated by the extraction process. The environmental impact caused by oil sands extraction is frequently criticized by environmental groups such as Greenpeace, Climate Reality Project, Pembina Institute, 350.org, MoveOn.org, League of Conservation Voters, Patagonia, Sierra Club, and Energy Action Coalition. In particular, mercury contamination has been found around oil sands production in Alberta, Canada. The European Union has indicated that it may vote to label oil sands oil as "highly polluting". Although oil sands exports to Europe are minimal, the issue has caused friction between the EU and Canada. According to the California-based Jacobs Consultancy, the European Union used inaccurate and incomplete data in assigning a high greenhouse gas rating to gasoline derived from Alberta's oil sands. Also, Iran, Saudi Arabia, Nigeria and Russia do not provide data on how much natural gas is released via flaring or venting in the oil extraction process. The Jacobs report pointed out that extra carbon emissions from oil-sands crude are 12 percent higher than from regular crude, although it was assigned a GHG rating 22% above the conventional benchmark by the EU.
In 2014, results of a study published in the Proceedings of the National Academy of Sciences showed that official reports understated emissions. The report's authors noted that "emissions of organic substances with potential toxicity to humans and the environment are a major concern surrounding the rapid industrial development in the Athabasca oil sands region (AOSR)."
This study found that tailings ponds were an indirect pathway transporting uncontrolled releases of evaporative emissions of three representative polycyclic aromatic hydrocarbons (PAHs: phenanthrene, pyrene, and benzo(a)pyrene), and that these emissions had been previously unreported.
Air pollution management
The Alberta government computes an Air Quality Health Index (AQHI) from sensors in five communities in the oil sands region, operated by a "partner" called the Wood Buffalo Environmental Association (WBEA). Each of their 17 continuous monitoring stations measures 3 to 10 air quality parameters among carbon monoxide (CO), hydrogen sulfide (H2S), total reduced sulfur (TRS), ammonia (NH3), nitric oxide (NO), nitrogen dioxide (NO2), nitrogen oxides (NOx), ozone (O3), particulate matter (PM2.5), sulfur dioxide (SO2), total hydrocarbons (THC), and methane/non-methane hydrocarbons (CH4/NMHC). These AQHI are said to indicate "low risk" air quality more than 95% of the time.
Prior to 2012, air monitoring showed significant increases in exceedances of hydrogen sulfide (H2S) limits, both in the Fort McMurray area and near the oil sands upgraders. In 2007, the Alberta government issued an environmental protection order to Suncor in response to numerous occasions when ground-level concentrations of H2S exceeded standards. The Alberta Ambient Air Data Management System (AAADMS) of the Clean Air Strategic Alliance (aka CASA Data Warehouse) records that, during the year ending on 1 November 2015, there were 6 hourly reports of values exceeding the 10 ppb limit for H2S, down from 11 in 2014 and 73 in 2012 (there were 4 such reports in 2013).
In September 2015, the Pembina Institute published a brief report about "a recent surge of odour and air quality concerns in northern Alberta associated with the expansion of oilsands development", contrasting the responses to these concerns in Peace River and Fort McKay. In Fort McKay, air quality is actively addressed by stakeholders represented in the WBEA, whereas the Peace River community must rely on the response of the Alberta Energy Regulator. In an effort to identify the sources of the noxious odours in the Fort McKay community, a Fort McKay Air Quality Index was established, extending the provincial Air Quality Health Index to include possible contributors to the problem: SO2, TRS, and THC. Despite these advantages, more progress was made in remediating the odour problems in the Peace River community, although only after some families had already abandoned their homes. The odour concerns in Fort McKay were reported to remain unresolved.
Land use and waste management
A large part of oil sands mining operations involves clearing trees and brush from a site and removing the overburden (topsoil, muskeg, sand, clay and gravel) that sits atop the oil sands deposit. Approximately 2.5 tons of oil sands are needed to produce one barrel of oil (a barrel of oil weighs roughly 1⁄8 of a ton). As a condition of licensing, projects are required to implement a reclamation plan. The mining industry asserts that the boreal forest will eventually colonize the reclaimed lands, but its operations are massive and work on long-term timeframes. As of 2013, about 715 square kilometres (276 sq mi) of land in the oil sands region had been disturbed, and 72 km2 (28 sq mi) of that land was under reclamation.
In March 2008, Alberta issued the first-ever oil sands land reclamation certificate to Syncrude for the 1.04 square kilometres (0.40 sq mi) parcel of land known as Gateway Hill, approximately 35 kilometres (22 mi) north of Fort McMurray. Several reclamation certificate applications for oil sands projects are expected within the next 10 years.
Water management
Between 2 and 4.5 volume units of water are used to produce each volume unit of synthetic crude oil in an ex-situ mining operation. According to Greenpeace, the Canadian oil sands operations use 349×10^6 m3/a (12.3×10^9 cu ft/a) of water, twice the amount of water used by the city of Calgary. However, in SAGD operations, 90–95% of the water is recycled and only about 0.2 volume units of water are used per volume unit of bitumen produced.
For the Athabasca oil sands operations, water is supplied from the Athabasca River, the ninth longest river in Canada. The average flow just downstream of Fort McMurray is 633 m3/s (22,400 cu ft/s), with its highest daily average measuring 1,200 m3/s (42,000 cu ft/s). Oil sands industry water licence allocations total about 1.8% of the Athabasca river flow. Actual use in 2006 was about 0.4%. In addition, according to the Water Management Framework for the Lower Athabasca River, during periods of low river flow water consumption from the Athabasca River is limited to 1.3% of annual average flow.
In December 2010, the Oil Sands Advisory Panel, commissioned by former environment minister Jim Prentice, found that the system in place for monitoring water quality in the region, including work by the Regional Aquatic Monitoring Program, the Alberta Water Research Institute, the Cumulative Environmental Management Association and others, was piecemeal and should become more comprehensive and coordinated.
Greenhouse gas emissions
The production of bitumen and synthetic crude oil emits more greenhouse gases than the production of conventional crude oil. A 2009 study by the consulting firm IHS CERA estimated that production from Canada's oil sands emits "about 5% to 15% more carbon dioxide, over the 'well-to-wheels' (WTW) lifetime analysis of the fuel, than average crude oil." Author and investigative journalist David Strahan stated that same year that IEA figures show carbon dioxide emissions from the oil sands are 20% higher than average emissions from petroleum production.
A Stanford University study commissioned by the EU in 2011 found that oil sands crude was as much as 22% more carbon-intensive than other fuels. Greenpeace says the oil sands industry has been identified as the largest contributor to greenhouse gas emissions growth in Canada, as it accounts for 40 million tons of CO2 emissions per year.
According to the Canadian Association of Petroleum Producers and Environment Canada, the industrial activity undertaken to produce oil sands makes up about 5% of Canada's greenhouse gas emissions, or 0.1% of global greenhouse gas emissions; the oil sands are predicted to grow to make up 8% of Canada's greenhouse gas emissions by 2015. While emissions per barrel of bitumen produced decreased 26% over the decade 1992–2002, total emissions from production activity were expected to increase due to higher production levels. As of 2006, producing one barrel of oil from the oil sands released almost 75 kilograms (165 lb) of greenhouse gases, with total emissions estimated to be 67 megatonnes (66,000,000 long tons; 74,000,000 short tons) per year by 2015.
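A back-of-the-envelope check of how the per-barrel figure above scales to an annual total; the production rate used in the example is illustrative, chosen only to show that roughly 2.45 million bbl/d at 75 kg per barrel reproduces the quoted 67 Mt/yr estimate.

```python
# Scale a per-barrel GHG intensity (kg per barrel) to annual emissions
# in megatonnes for a given production rate.

KG_GHG_PER_BBL = 75          # per-barrel intensity quoted above (as of 2006)

def annual_emissions_mt(production_bbl_per_day: float) -> float:
    """Annual greenhouse gas emissions in megatonnes for a production rate."""
    kg_per_year = production_bbl_per_day * 365 * KG_GHG_PER_BBL
    return kg_per_year / 1e9   # kilograms -> megatonnes

print(round(annual_emissions_mt(2_450_000), 1))   # ~67.1 Mt/yr
```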
A study by IHS CERA found that fuels made from Canadian oil sands resulted in significantly lower greenhouse gas emissions than many commonly cited estimates. A 2012 study by Swart and Weaver estimated that if only the economically viable reserve of 170 Gbbl (27×10^9 m3) of oil sands were burnt, the global mean temperature would increase by 0.02 to 0.05 °C. If the entire oil-in-place of 1.8 trillion barrels were to be burnt, the predicted global mean temperature increase is 0.24 to 0.50 °C. Bergerson et al. found that while WTW emissions can be higher than those of conventional crude oil, the lower-emitting oil sands cases can outperform higher-emitting conventional crude cases.
To offset greenhouse gas emissions from the oil sands and elsewhere in Alberta, sequestering carbon dioxide emissions inside depleted oil and gas reservoirs has been proposed. This technology is inherited from enhanced oil recovery methods. In July 2008, the Alberta government announced a C$2 billion fund to support sequestration projects in Alberta power plants and oil sands extraction and upgrading facilities.
In November 2014, Fatih Birol, the chief economist of the International Energy Agency, described additional greenhouse gas emissions from Canada's oil sands as "extremely low". The IEA forecasts that in the next 25 years oil sands production in Canada will increase by more than 3 million barrels per day (480,000 m3/d), but Dr. Birol said "the emissions of this additional production is equal to only 23 hours of emissions of China — not even one day." The IEA is charged with responsibility for battling climate change, but Dr. Birol said he spends little time worrying about carbon emissions from oil sands. "There is a lot of discussion on oil sands projects in Canada and the United States and other parts of the world, but to be frank, the additional CO2 emissions coming from the oil sands is extremely low." Dr. Birol acknowledged that there is tremendous difference of opinion on the course of action regarding climate change, but added, "I hope all these reactions are based on scientific facts and sound analysis."
In 2014, the U.S. Congressional Research Service published a report in preparation for the decision about permitting construction of the Keystone XL pipeline. The report states in part: "Canadian oil sands crudes are generally more GHG emission-intensive than other crudes they may displace in U.S. refineries, and emit an estimated 17% more GHGs on a life-cycle basis than the average barrel of crude oil refined in the United States".
According to Natural Resources Canada (NRCan), the 23 percent increase in GHG emissions in Canada from 2005 to 2017 was "largely from increased oil sands production, particularly in-situ extraction".
Aquatic life deformities
There is conflicting research on the effects of the oil sands development on aquatic life. In 2007, Environment Canada completed a study that showed high deformity rates in fish embryos exposed to the oil sands. David W. Schindler, a limnologist from the University of Alberta, co-authored a study on Alberta's oil sands' contribution of aromatic polycyclic compounds, some of which are known carcinogens, to the Athabasca River and its tributaries.
Scientists, local doctors, and residents supported a letter sent to the Prime Minister in September 2010 calling for an independent study of Lake Athabasca (which is downstream of the oil sands) to be initiated due to the rise of deformities and tumors found in fish caught there. The bulk of the research that defends the oil sands development is done by the Regional Aquatics Monitoring Program (RAMP), whose steering committee is composed largely of oil and gas companies. RAMP studies show that deformity rates are normal compared to historical data and to the deformity rates in rivers upstream of the oil sands.
Public health impacts
In 2007, it was suggested that wildlife has been negatively affected by the oil sands; for instance, moose were found in a 2006 study to have as much as 453 times the acceptable level of arsenic in their systems, though later studies lowered this to 17 to 33 times the acceptable level (although below international thresholds for consumption).
Concerns have been raised about the negative impacts that the oil sands have on public health, including higher than normal rates of cancer among residents of Fort Chipewyan. However, John O'Connor, the doctor who initially reported the higher cancer rates and linked them to the oil sands development, was subsequently investigated by the Alberta College of Physicians and Surgeons. The College later reported that O'Connor's statements consisted of "mistruths, inaccuracies and unconfirmed information".
In 2010, the Royal Society of Canada released a report stating that "there is currently no credible evidence of environmental contaminant exposures from oil sands reaching Fort Chipewyan at levels expected to cause elevated human cancer rates." In August 2011, the Alberta government initiated a provincial health study to examine whether a link exists between the higher rates of cancer and the oil sands emissions.
In a report released in 2014, Alberta's Chief Medical Officer of Health, Dr. James Talbot, stated that "There isn't strong evidence for an association between any of these cancers and environmental exposure [to oil sands]." Rather, Talbot suggested that the cancer rates at Fort Chipewyan, which were slightly higher than the provincial average, were likely due to a combination of factors such as high rates of smoking, obesity, diabetes, and alcoholism, as well as poor levels of vaccination.
External links
Oil Sands Discovery Centre, Fort McMurray, Alberta, Canada
Edward Burtynsky, An aerial look at the Alberta Tar Sands
G.R. Gray, R. Luhning: Bitumen, The Canadian Encyclopedia
Jiri Rezac, Alberta Oilsands photo story and aerials
Exploring the Alberta tar sands, Citizenshift, National Film Board of Canada
Indigenous Groups Lead Struggle Against Canada's Tar Sands – video report by Democracy Now!
Extraction of vanadium from oil sands
Hoffman, Carl (1 October 2009). "New Tech to Tap North America's Vast Oil Reserves". Popular Mechanics.
Canadian Oil Sands: Life-Cycle Assessments of Greenhouse Gas Emissions, Congressional Research Service
Alberta Government Oil Sands Information Portal, Interactive Map and Data Library
agricultural emissions research levy
The agricultural emissions research levy was a controversial tax proposal in New Zealand. First proposed in 2003, it would have collected an estimated $8.4 million annually from livestock farmers (compared with an estimated $50–125 million per year in costs to the public caused by farm animals' emissions of greenhouse gases such as methane), and the revenue would have been used to fund research on the livestock industry's emissions of greenhouse gases, to further the nation's compliance with the Kyoto Protocol.
History
In May 2003, a report prepared for the Ministry of Agriculture and Fisheries (the O'Hara report) identified that although some funding for agricultural emissions research was being provided by FRST and MAF, "The level of investment in abatement research by other public and private sources has been low". The report assessed that a minimum of $4.5 million (optimally $8.4 million) of additional funding would be needed to fund the recommended research program.
In 2003, the tax was opposed by MPs of the ACT Party and the National Party, but eventually they proposed an alternative solution, as described below. Shane Ardern, a National Party MP, drove a tractor up the steps of Parliament as part of a protest against the tax. In 2004, a consortium of the livestock industry agreed to pay for a portion of this research (just not via taxation), and the government reserved the right to reconsider the tax if it or the industry withdrew from the agreement.
In New Zealand, farm animals account for approximately 50% of greenhouse gas emissions, according to two official estimates, and the Kyoto treaty may compel New Zealand to pay penalties if gas levels are not brought down. Research shows that the world's livestock are a significant contributor to global emissions (New Zealand exports a significant share of its dairy and meat, as noted in Economy of New Zealand). In 2004, whilst the Labour Party's coalition still led parliament, New Zealand's livestock farmers agreed to contribute to related scientific research, and to fund an unspecified portion of the costs of the Pastoral Greenhouse Gas Research Consortium.
In September 2009, the National-led government announced that a push would be made for the formation of a Global Alliance to investigate methods of reducing greenhouse gas emissions due to agriculture. Simon Upton, a former National Party MP and Minister for the Environment, was appointed as a special envoy to liaise with other countries on the issue.
Controversy
The tax was described by livestock farmers and other critics as a "flatulence tax" or "fart tax" (though these nicknames are misleading, since most ruminant methane is belched rather than passed as flatulence, having been produced by bacteria in the first stomach, the rumen), and the president of the Federated Farmers contended that the government was trying to make the livestock industry pay for the "largesse" of others.
In contrast, supporters of such taxes contend that people who consume more of the products that increase healthcare costs (in a system where citizens share each other's medical costs), whose habits damage the environment, or whose animals constantly require antibiotics to ameliorate disease-prone conditions (antibiotics which breed super-bugs that may also attack humans) would merely be paying for their own largesse and for the costs their habits impose on society. On this view, one should pay commensurately more as one does or consumes more of what harms others in society (see also Pigovian tax).
See also
Climate change in New Zealand
Agriculture in New Zealand
Livestock's Long Shadow – Environmental Issues and Options
Climate change and agriculture: livestock
External links
Agricultural Emissions Research Funding – discussion document
Department of Chemistry, University of Otago – "Methane – and lots of hot air" (a Kiwi Professor of Chemistry's flatulence humour)
world energy supply and consumption
World energy supply and consumption refers to the global primary energy production, energy conversion and trade, and final consumption of energy. Energy can be used in various forms, as processed fuels or electricity, and for various purposes, such as transportation or electricity generation. Energy production and consumption are an important part of the economy. This topic includes heat, but not energy from food.
This article provides a brief overview of energy supply and consumption, using statistics summarized in tables, of the countries and regions that produce and consume the most energy. As of 2022, about 80% of energy consumption still comes from fossil fuels. The Gulf States and Russia are major energy exporters, with notable customers being the European Union and China, which do not produce enough energy domestically to satisfy their demand. Energy consumption generally increases about 1–2% per year, except for solar and wind energy, which averaged 20% per year in the 2010s.
Energy that is produced, for example from fossil fuels, is processed to make it suitable for consumption by end users. The energy supply chain from initial production to final consumption involves many different activities, ultimately causing a loss of useful energy (see exergy). Energy consumption per capita in North America is very high, while in less developed countries it is low and usually more renewable. There is a clear connection between energy consumption per capita and GDP per capita. Due to the COVID-19 pandemic, there was a significant decline in energy usage worldwide in 2020, but total energy demand worldwide had recovered by 2021, and hit a record high in 2022.
A serious problem concerning energy production and consumption is greenhouse gas emissions. Of about 50 billion tonnes of total greenhouse gases emitted worldwide annually, 36 billion tonnes of carbon dioxide were emitted due to energy (almost all from fossil fuels) in 2021. The goal set in the Paris Agreement to limit climate change will be difficult to achieve. Many scenarios have been envisioned to reduce greenhouse gas emissions, usually under the name of net zero by 2050.
Availability of data
Many countries publish statistics on the energy supply and consumption of either their own country, of other countries of interest, or of all countries combined in one chart. One of the largest organizations in this field, the International Energy Agency (IEA), sells comprehensive yearly energy data, which leaves this data paywalled and difficult for internet users to access. The organization Enerdata, on the other hand, publishes a free yearbook, making the data more accessible. Another trustworthy organization that provides accurate energy data, mainly for the USA, is the U.S. Energy Information Administration.
Primary energy production
This is the worldwide production of energy, extracted or captured directly from natural sources. In energy statistics, primary energy (PE) refers to the first stage where energy enters the supply chain before any further conversion or transformation process. Energy production is usually classified as:
Fossil, using coal, crude oil, and natural gas;
Nuclear, using uranium;
Renewable, using biomass, geothermal, hydropower, solar, wind, tidal, wave, among others.
Primary energy assessment by the IEA follows certain rules to ease measurement of different kinds of energy. These rules are controversial.
Water and air flow energy that drives hydro and wind turbines, and sunlight that powers solar panels, are not taken as PE, which is instead set at the electric energy produced. But fossil and nuclear energy are set at the reaction heat, which is about three times the electric energy. This measurement difference can lead to underestimating the economic contribution of renewable energy.
Enerdata displays:
TOTAL ENERGY / PRODUCTION: Coal, Oil, Gas, Biomass, Heat and Electricity.
RENEWABLES / % IN ELECTRICITY PRODUCTION: Renewables, non-renewables.
The table lists worldwide PE and the countries producing most (76%) of that in 2021, using Enerdata. The amounts are rounded and given in million tonnes of oil equivalent per year (1 Mtoe = 11.63 TWh, 1 TWh = 10^9 kWh) and % of total. Renewable is Biomass plus Heat plus the renewable percentage of Electricity production (hydro, wind, solar). Nuclear is the non-renewable percentage of Electricity production. The above-mentioned underestimation of hydro, wind and solar energy, compared to nuclear and fossil energy, applies also to Enerdata. For more detailed energy production, see:
List of countries by electricity production
Nuclear power by country
List of countries by oil production
List of countries by natural gas production
List of countries by coal production
Energy conversion and trade
Primary energy is converted in many ways to energy carriers, also known as secondary energy:
Coal mainly goes to thermal power stations. Coke is derived by destructive distillation of bituminous coal.
Crude oil goes mainly to oil refineries.
Natural gas goes to natural gas processing plants to remove contaminants such as water, carbon dioxide and hydrogen sulfide, and to adjust the heating value. It is used as fuel gas, also in thermal power stations.
Nuclear reaction heat is used in thermal power stations.
Biomass is used directly or converted to biofuel.
Electricity generators are driven by steam or gas turbines in a thermal plant, or water turbines in a hydropower station, or wind turbines, usually in a wind farm.
The invention of the solar cell in 1954 started electricity generation by solar panels, connected to a power inverter. Mass production of panels around the year 2000 made this economic.
Much primary and converted energy is traded among countries. The table lists countries with a large difference between exports and imports in 2021, expressed in Mtoe. A negative value indicates that much energy import is needed for the economy. Russian gas exports were reduced substantially in 2022, as pipeline capacity to Asia plus LNG export capacity is much less than the volume of gas no longer sent to Europe. Large-scale transport goes by tanker ship, tank truck, LNG carrier, rail freight, pipeline and electric power transmission.
Total energy supply
Total energy supply (TES) indicates the sum of production and imports, subtracting exports and storage changes. For the whole world, TES nearly equals primary energy PE, because imports and exports cancel out, but for countries TES and PE differ in quantity, and also in quality, as secondary energy is involved, e.g., import of an oil refinery product. TES is all the energy required to supply energy for end users. The tables list TES and PE for some countries where these differ considerably, both in 2021 and across the history of TES. Most growth of TES since 1990 occurred in Asia. The amounts are rounded and given in Mtoe.
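Because the tables here use Mtoe while other figures in this article are quoted in EJ, a small converter may help; a minimal sketch based on the equivalences stated above (1 Mtoe = 11.63 TWh). The EJ-to-Mtoe factor is a standard conversion (1 Mtoe = 41.868 PJ) not stated in the text.

```python
# Convert between the energy units used in this article:
# Mtoe (million tonnes of oil equivalent), TWh and EJ.

MTOE_TO_TWH = 11.63            # stated above
EJ_TO_MTOE = 1000.0 / 41.868   # 1 Mtoe = 41.868 PJ, so ~23.88 Mtoe per EJ

def ej_to_mtoe(ej: float) -> float:
    return ej * EJ_TO_MTOE

def mtoe_to_twh(mtoe: float) -> float:
    return mtoe * MTOE_TO_TWH

# Example: the 2019 total energy supply of 606 EJ quoted below
tes_mtoe = ej_to_mtoe(606)
print(round(tes_mtoe), round(mtoe_to_twh(tes_mtoe)))   # ~14474 Mtoe, ~168333 TWh
```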
Enerdata labels TES as total energy consumption. 25% of worldwide primary production is used for conversion and transport, and 6% for non-energy products like lubricants, asphalt and petrochemicals. In 2019, TES was 606 EJ and final consumption was 418 EJ, 69% of TES. Most of the energy lost by conversion occurs in thermal electricity plants and in the energy industry's own use.
Discussion about energy loss
There are different qualities of energy. Heat, especially at a relatively low temperature, is low-quality energy, whereas electricity is high-quality energy. It takes around 3 kWh of heat to produce 1 kWh of electricity. But by the same token, a kilowatt-hour of this high-quality electricity can be used to pump several kilowatt-hours of heat into a building using a heat pump. And electricity can be used in many ways in which heat cannot. So the loss of energy incurred in thermal electricity plants is not comparable to a loss due to, say, resistance in power lines, because of the quality difference (see energy quality). In fact, the loss in thermal plants is due to poor conversion of the chemical energy of fuel to electricity by combustion. The chemical energy of fuel is not low-quality, because conversion to electricity in fuel cells can theoretically approach 100% (see the theoretical maximum efficiency of fuel cells). So energy loss in thermal plants is real loss.
Final consumption
Total final consumption (TFC) is the worldwide consumption of energy by end-users (whereas primary energy consumption (Eurostat) or total energy supply (IEA) is total energy demand, and thus also includes what the energy sector uses itself and transformation and distribution losses). This energy consists of fuel (78%) and electricity (22%). The tables list amounts, expressed in million tonnes of oil equivalent per year (1 Mtoe = 11.63 TWh), and how much of these is renewable energy. Non-energy products are not considered here. The data are from 2018.
Fuel comprises:
fossil: natural gas, fuel derived from petroleum (LPG, gasoline, kerosene, gas/diesel, fuel oil), and fuel derived from coal (anthracite, bituminous coal, coke, blast furnace gas);
renewable: biofuel and fuel derived from waste;
fuel for district heating.
The amounts are based on lower heating value. The first table lists final consumption in the countries/regions which use most (85%), and per person. In developing countries, fuel consumption per person is low and more renewable. Canada, Venezuela and Brazil generate most of their electricity with hydropower. The world's renewable share of TFC was 18% in 2018: 7% traditional biomass, 3.6% hydropower and 7.4% other renewables.
In Africa, 32 of the 48 nations are declared to be in an energy crisis by the World Bank; see Energy in Africa. The next table shows the countries consuming most (85%) in Europe.
Trend
In the period 2005–2017, worldwide final consumption of coal increased 23%, oil and gas increased 18%, and electricity increased 41%.
Energy for energy
Some fuel and electricity is used to construct, maintain and demolish/recycle installations that produce fuel and electricity, such as oil platforms, uranium isotope separators and wind turbines. For these producers to be economical, the ratio of energy returned on energy invested (EROEI) or energy return on investment (EROI) should be large enough. If the final energy delivered for consumption is E and the EROI equals R, then the net energy available is E − E/R. The percentage of available energy is 100 − 100/R. For R > 10, more than 90% is available, but for R = 2 only 50%, and for R = 1 none. This steep decline is known as the net energy cliff.
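A minimal sketch of the net-energy formula just described, tabulating the available share 100 − 100/R for a few values of R to make the shape of the cliff visible; the chosen R values are illustrative.

```python
# Net energy arithmetic: with delivered energy E and EROI R, the net energy
# is E - E/R, so the share available to society is 100 - 100/R percent.
# The available share stays high until R falls below roughly 5-10, then
# collapses rapidly: the "net energy cliff".

def available_percent(r: float) -> float:
    """Percentage of delivered energy remaining after the energy invested."""
    return 100.0 - 100.0 / r

for r in (50, 10, 5, 2, 1.25, 1):
    print(f"EROI {r:>5}: {available_percent(r):5.1f}% net energy")
# EROI 50: 98.0%   EROI 10: 90.0%   EROI 5: 80.0%
# EROI  2: 50.0%   EROI 1.25: 20.0% EROI 1:  0.0%
```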
Outlook
IEA scenarios
In World Energy Outlook 2023, the IEA notes that "we are on track to see all fossil fuels peak before 2030".: 18  The IEA presents three scenarios.: 17
The Stated Policies Scenario (STEPS) provides an outlook based on the latest policy settings. The share of fossil fuels in global energy supply, stuck for decades around 80%, starts to edge downwards and reaches 73% by 2030.: 18  This undercuts the rationale for any increase in fossil fuel investment.: 19  Renewables are set to contribute 80% of new power capacity to 2030, with solar PV alone accounting for more than half.: 20  The STEPS sees a peak in energy-related CO2 emissions in the mid-2020s, but emissions remain high enough to push up global average temperatures to around 2.4 °C in 2100.: 22  Total energy demand continues to increase through to 2050.: 23  Total energy investment remains at about USD 3 trillion per year.: 49
The Announced Pledges Scenario (APS) assumes all national energy and climate targets made by governments are met in full and on time. The APS is associated with a temperature rise of 1.7 °C in 2100 (with a 50% probability).: 92  Total energy investment rises to about USD 4 trillion per year after 2030.: 49
The Net Zero Emissions by 2050 (NZE) Scenario limits global warming to 1.5 °C.: 17  The share of fossil fuels reaches 62% in 2030.: 101  Methane emissions from fossil fuel supply are cut by 75% in 2030.: 45  Total energy investment rises to almost USD 5 trillion per year after 2030.: 49  Clean energy investment needs to rise everywhere, but the steepest increases are needed in emerging market and developing economies other than China, requiring enhanced international support.: 46  The share of electricity in final consumption exceeds 50% by 2050 in the NZE. The share of nuclear power in electricity generation remains broadly stable over time in all scenarios, at about 9%.: 106
UN Emissions Gap Report 2023
As temperature records tumble and climate impacts intensify, the Emissions Gap Report 2023: Broken Record – Temperatures hit new highs, yet world fails to cut emissions (again) finds that the world is heading for a 2.5–2.9 °C temperature rise above pre-industrial levels unless countries step up action and deliver more than promised in their 2030 pledges under the Paris Agreement.
Alternative scenarios
Alternative scenarios for achieving the Paris Climate Agreement goals have been developed by a team of 20 scientists at the University of Technology Sydney, the German Aerospace Center, and the University of Melbourne, using IEA data but proposing a transition to nearly 100% renewables by mid-century, along with steps such as reforestation. Nuclear power and carbon capture are excluded in these scenarios. The researchers say the costs will be far less than the $5 trillion per year governments currently spend subsidizing the fossil fuel industries responsible for climate change.: ix
In the +2.0 °C (global warming) scenario, total primary energy demand in 2040 can be 450 EJ (10,755 Mtoe), or 400 EJ (9,560 Mtoe) in the +1.5 °C scenario, well below current production. Renewable sources can increase their contribution to 300 EJ in the +2.0 °C scenario, or 330 EJ in the +1.5 °C scenario, in 2040. In 2050, renewables can cover nearly all energy demand; non-energy consumption will still include fossil fuels.: xxvii, Fig. 5
Global electricity generation from renewable energy sources will reach 88% by 2040 and 100% by 2050 in the alternative scenarios.
"New" renewables—mainly wind, solar and geothermal energy—will contribute 83% of the total electricity generated.: xxiv  The average annual investment required between 2015 and 2050, including costs for additional power plants to produce hydrogen and synthetic fuels and for plant replacement, will be around $1.4 trillion.: 182 Shifts from domestic aviation to rail and from road to rail are needed. Passenger car use must decrease in the OECD countries (but increase in developing world regions) after 2020. The passenger car use decline will be partly compensated by strong increase in public transport rail and bus systems.: xxii Fig.4 CO2 emission can reduce from 32 Gt in 2015 to 7 Gt (+2.0 Scenario) or 2.7 Gt (+1.5 Scenario) in 2040, and to zero in 2050.: xxviii See also Energy industry Environmental impact of the energy industry Notes References External links Enerdata - World Energy & Climate Statistics International Energy Outlook, by the U.S. Energy Information Administration World Energy Outlook from the IEA
pescetarianism
Pescetarianism (PESK-ə-TAIR-ee-ə-niz-əm; sometimes spelled pescatarianism) is the practice of incorporating seafood into an otherwise vegetarian diet. Pescetarians may or may not consume other animal products such as eggs and dairy products. Approximately 3% of adults worldwide are pescetarian, according to 2017–2018 research conducted by data and analytics companies.
Definition and etymology
"Pescetarian" is a neologism formed as a portmanteau of the Italian word "pesce" ("fish") and the English word "vegetarian". The term was coined in the United States in the early 1990s. "Pesco-vegetarian" is a synonymous term that is seldom used outside of academic research, but it has sometimes appeared in other American publications and literature since at least 1980.
History
Early history
The first vegetarians in written western history may have been the Pythagoreans, a title derived from the Greek philosopher Pythagoras. Though Pythagoras lent his name to the meatless diet, some biographers suspect he may have eaten fish as well at some points, which would have made him not a vegetarian but a pescetarian by today's standards. Many of Pythagoras's philosophies inspired Plato, who advocated for the moral and nutritional superiority of vegetarian-oriented diets. In Plato's ideal republic, a healthy diet would consist of cereals, seeds, beans, fruit, milk, honey and fish.
In 675, the consumption of livestock and wild animals was banned in Japan by Emperor Tenmu, due to the influence of Buddhism and the lack of arable land. However, Tenmu did not ban the consumption of deer or wild boar.: 53–54  Subsequently, in the year 737 of the Nara period, the Emperor Seimu approved the eating of fish and shellfish. During the twelve hundred years from the Nara period to the Meiji Restoration in the latter half of the 19th century, Japanese people ate vegetarian-style meals, and on special occasions, seafood was served. Exceptions were wild fowl served amongst the Heian nobility,: 73–74  and when Europeans arrived in Japan in the 16th century, the Japanese diet included boar meat.
Several orders of monks in medieval Europe restricted or banned the consumption of meat for ascetic reasons, but none of them abstained from the consumption of fish; these monks were not vegetarians, but some were pescetarians.
Marcion of Sinope and his followers ate fish but no fowl or red meat. Fish was seen by the Marcionites as a holier kind of food. They consumed bread, fish, honey, milk, and vegetables.
The "Hearers" of the ecclesiastical hierarchy of Manichaeism lived on a diet of fish, grain, and vegetables. Consumption of land animals was forbidden, based on the Manichaean belief that "fish, being born in and of the waters, and without any sexual connexion on the part of other fishes, are free from the taint which pollutes all animals".
The Rule of Saint Benedict insisted upon total abstinence from the meat of four-footed animals, except in cases of the sick. Benedictine monks thus followed a diet based on vegetables, eggs, milk, butter, cheese, and fish. Paul the Deacon specified that cheese, eggs, and fish were part of a monk's ordinary diet. The Benedictine monk Walafrid Strabo commented, "Some salt, bread, leeks, fish and wine; that is our menu."
The Carthusians followed a strict diet that consisted of fish, cheese, eggs, and vegetables, with only bread and water on Fridays.
In the 13th century, Cistercian monks consumed fish and eggs. Ponds were created for fish farming.
From the early 14th century, Benedictine and Cistercian monks no longer abstained from consuming the meat of four-footed animals. In 1336, Pope Benedict XII permitted monks to eat meat four days a week outside of the fast season, if it was not served in the refectory.
The anchorites of England ate a pescetarian diet of fish seasoned with apples and herbs, bean or pea soup, and milk, butter and oil.
19th century to present
Francis William Newman, who was President of the Vegetarian Society from 1873 to 1883, made an associate membership possible for people who were not completely vegetarian, such as pescetarians. Eventually, in the 1890s, Newman himself switched from following an ovo-lacto-vegetarian diet to a pescetarian diet, with the rationale that fish do not waste land space, are plentiful due to high reproduction rates, do not care for their young and so have no parental feelings to violate, and can be captured and slaughtered in ways that inflict minimal pain.
The 2016 book Seagan Eating promoted a seafood diet, which is distinguished from ordinary pescetarian diets because it discourages consumption of dairy and eggs.
Trends and demographics
As of 2020, pescetarianism has been described as a plant-based diet. Regular fish consumption and decreased red meat consumption are recognized as dietary practices that may promote health. Pescetarianism has been shown to be more popular among women than men in all regions where data on the sex ratio are available.
Global
In 2018, Ipsos MORI reported that 73% of people worldwide followed a diet where both meat and non-animal products were regularly consumed, with 14% considered flexitarians, 5% vegetarians, 3% vegans, and 3% pescetarians. These are similar to the results collected by GlobalData just a year earlier, where 23% of the sample had below-average meat consumption, 5% had vegetarian diets, 2% had vegan diets and 3% had pescetarian diets. Globally, pescetarian diets seem to have increased in popularity in the mid-to-late 2010s; only 40% of pescetarians surveyed had been adhering to the diet for more than a couple of years, and another 18% reported adhering to the diet for about a year.
United Kingdom
A 2018 poll of 2,000 United Kingdom adults found that about 12% of adults adhered to a meat-free diet, with 2% vegan, 6–7% ovo-lacto-vegetarian, and 4% pescetarian. Different studies and surveys have found a more modest number of meat-abstainers; a 2021 survey found 10% of Brits were meat-abstainers, with 3% of the population being pescetarians.
In Great Britain as of January 2019, women between 18 and 24 years of age were the most likely demographic group to follow a pescetarian diet. In general, men were less interested in pescetarianism, and men 35 years and above were the least likely to adhere to a pescetarian diet pattern.
Other regions
In 2018, one survey found that people in Africa and the Middle East had a high incidence of pescetarian diets (5%) compared to other areas of the world. In Europe, the incidence of pescetarianism varied by country, according to a 2020 survey documenting the dietary practices of residents in seven European nations: on average, pescetarians made up about 3% of the EU population, with a slightly higher incidence in Germany and Belgium.
Motivations and rationale
Sustainability and environmental concerns
It is common for meat-abstainers of all kinds to participate in the "green movement" and be conscientious about global food sustainability and environmentalism; switching to a pescetarian dietary pattern can potentially benefit both.
People may adopt a pescetarian diet out of a desire to lower their dietary carbon footprint. A 2014 lifecycle analysis of greenhouse gas emissions estimated that a pescetarian diet would provide a 45% reduction in emissions compared to an omnivorous diet. Research on the diets of over 55,000 UK residents found that meat-eaters had dietary greenhouse gas emissions about 50% higher than those of pescetarians. Compared to an omnivorous diet, pescetarian diets also had 64% less environmental impact overall when greenhouse gas emissions, land use and cumulative energy demand were assessed together.
A Japanese study in 2018 found that various diet changes could successfully reduce the Japanese food-nitrogen footprint, particularly the adoption of a pescetarian diet, which may reduce the impact on nitrogen. Switching from an omnivorous diet to a pescetarian diet also carries high potential for reducing American food loss, because fish and shellfish contribute markedly less to food waste at the primary, retail and consumer levels than both red meat and poultry. Additionally, water conservation may be a motivator; a multinational study found that exchanging a conventional diet for a balanced pescetarian diet could reduce the dietary water footprint by 33% to 55%.

Health research
A common reason for adopting pescetarianism may be health-related, such as fish and plant food consumption as part of the Mediterranean diet, which is associated with a lowered risk of cardiovascular diseases. Pescetarian diets are under preliminary research for their potential to affect diabetes, long-term weight gain, and all-cause mortality.

Animal welfare concerns
Pescetarianism may be perceived as a more ethical choice because fish and shellfish may not experience fear, pain, and suffering as more complex animals like mammals and other tetrapods do; however, this remains an ongoing debate.
Some pescetarians may regard their diet as a transition to vegetarianism, while others may consider it an ethical compromise, often a practical necessity for obtaining nutrients that are absent, not easily found, or poorly bioavailable in plants.

Other considerations
Concerns have been raised about consuming some fish varieties containing toxins such as mercury and polychlorinated biphenyls (PCBs), although it is possible to select fish that contain little or no mercury and to moderate the consumption of mercury-containing fish. According to a 2018 global consumer survey, the majority of pescetarians, vegetarians and vegans (87%) reported that their food product choices are influenced by ideological factors, such as ethical concerns, environmental impact or social responsibility. Pescetarians may also be motivated by ethical concerns unrelated to animal protection or environmental protection, such as humanitarian or religious reasons: unlike farmed livestock, filter feeders and wild-caught fish are not fed protein sources that could otherwise feed food-insecure humans.

Abstinence in religion
Christianity
In both the Roman Catholic and Eastern Orthodox traditions, pescetarianism is referred to as a form of abstinence. During fast periods, Eastern Orthodox Christians often abstain from meat, dairy, eggs, and fish, but on holidays that occur on fast days (for example, 15 August on a Wednesday or Friday), fish is allowed, while meat and dairy remain forbidden.
Anthonian fasting has been considered a pescetarian-like variant of Orthodox fasting, as poultry and red meat are restricted throughout the year but fish, eggs, oils, dairy and wine are allowed most days.
Pescetarianism is relatively popular among Seventh-day Adventists compared to the general population; in the 2000s, 10% of North American Seventh-day Adventists surveyed reported adhering to a pescetarian diet. This higher popularity is likely due to the church promoting a "health message" to its followers and considering meat consumption unfavorable. Adventists who eat seafood do not eat shellfish, because the church expects all followers to eat only kosher foods deemed permissible by Leviticus 11.

Judaism
Pescetarianism (provided the fish is kosher) conforms to Jewish dietary laws. Fish and all other seafood animals must have both fins and scales to be considered kosher. Aquatic mammals such as dolphins and whales are not kosher, nor are cartilaginous fish such as sharks and rays, since they all have dermal denticles rather than bony-fish scales. The lack of fins and scales also deems crustaceans (e.g. shrimp, crab, lobster) and molluscs (e.g. oyster, clam, conch, octopus, squid) to be "treif"—non-kosher. Roe, such as caviar, must come from a kosher fish to be permitted. Pescetarian diets simplify adherence to the Judaic separation of meat and dairy products, as kosher fish is "pareve"—neither "milk" nor "meat".
In 2015, members of the Liberal Judaism synagogue in Manchester founded The Pescetarian Society, citing pescetarianism as originally a Jewish diet and as a form of vegetarianism. The society has several advocacy interests: public health, promoting healthy eating, praising pescetarianism as "the natural human diet", supporting better animal welfare, raising awareness of the climate change crisis, and demanding that seafood be sustainable and responsibly caught.

Hinduism
Some Hindus, by choice, follow a strict lacto-vegetarian diet, and in India up to 44% of Hindus self-identify as some type of vegetarian. However, there are Hindus who consume fish. They are mainly from coastal south-western India; this community regards seafood in general as "vegetables from the sea" and refrains from eating land-based animals. Other Hindus who consume seafood include those from Bengal, Odisha, and other coastal areas. In Bengal, Hindus consume fish and are known to cook it daily.

Rastafari
The expression of Ital eating can vary from Rasta to Rasta, but a general principle is that food should be natural or pure, and from the earth. Though the Rastafari are generally associated with avid vegetarianism and veganism, a large minority of adherents deem certain kinds of fish an acceptable exception in the Ital diet. Rastafari who permit fish will avoid eating all kinds of shellfish, as these are considered "unclean" scavengers, a belief that stems from biblical teachings.

See also
Ikaria Study – Dietary study of long-lived Ikarian people found to have semi-vegetarian diets similar to pescetarianism.
List of diets – A comprehensive index of diets covered on Wikipedia
Mediterranean diet – Diet inspired by eating habits of the lands surrounding the Mediterranean Sea.
Okinawa diet – Eating habits of the indigenous people of the Ryukyu Islands.
Semi-vegetarianism – Other forms of semi-vegetarianism that include occasional seafood or meat consumption.

References
alarko holding
Alarko Holding is one of the largest business conglomerates in Turkey; it is listed on the Istanbul Stock Exchange. It operates in a variety of sectors, including construction, electricity generation and distribution, tourism, and real estate. It was founded by İshak Alaton and Üzeyir Garih in 1954.

History
As of 2014, it operates in the fields of contracting, energy, industry, tourism, aquaculture and real estate. In addition, the Alarko Education and Culture Foundation (ALEV) was established in 1986 to carry out social responsibility projects within the Holding.

Greenhouse gas emissions
Climate TRACE estimates that the Cenal coal-fired power plant emitted over 7 million tons of greenhouse gases in 2021, out of the country's total of 560 million tons.

See also
List of companies of Turkey

References

External links
Official website (in English)
Alarko Holding bloomberg.com
Biography of İshak Alaton at Biyografi.net (in Turkish)
Hansen, Suzy (January 2, 2009). "Eye of the storm". The National. Archived from the original on February 27, 2009. Retrieved September 13, 2009.
energy in norway
Norway is a large energy producer, and one of the world's largest exporters of oil. Most of the electricity in the country is produced by hydroelectricity. Norway is one of the leading countries in the electrification of its transport sector, with the largest fleet of electric vehicles per capita in the world (see plug-in electric vehicles in Norway and electric car use by country).
Since the discovery of North Sea oil in Norwegian waters during the late 1960s, exports of oil and gas have become very important elements of the economy of Norway. With North Sea oil production having peaked, disagreements over exploration for oil in the Barents Sea, the prospect of exploration in the Arctic, and growing international concern over global warming mean that energy in Norway is currently receiving close attention.

Energy plan
In January 2008 the Norwegian government declared a goal of being carbon neutral by 2030. However, the government has not been specific about any plans to reduce emissions at home; the plan is based on buying carbon offsets from other countries.

Fuel types
Fossil fuels
In 2011, Norway was the eighth-largest crude oil exporter in the world (at 78 Mt), and the ninth-largest exporter of refined oil (at 86 Mt). It was also the world's third-largest natural gas exporter (at 99 bcm), having significant gas reserves in the North Sea. Norway also possesses some of the world's largest potentially exploitable coal reserves, located under the Norwegian continental shelf. More recently, in 2017, Norway ranked as the world's third-largest exporter of natural gas, behind Russia and Qatar.
Norway's abundant energy resources represent a significant source of national revenue. Crude oil and natural gas accounted for 40% of the country's total export value in 2015. As a share of GDP, the export of oil and natural gas is approximately 17%. As a means to ensure stability and mitigate the "Dutch disease" caused by fluctuations in the price of oil, the Norwegian government funnels a portion of this export revenue into a pension fund, the Government Pension Fund Global (GPFG). The government derives these funds from its stakes in the oil industry, such as its two-thirds share of Equinor, and distributes the resulting natural resource wealth as welfare investments for the mainland. Tying fiscal policy to the oil market in this way addresses an equity concern: it prevents a situation in which only a select few reap the direct benefits of what is in effect a public good. Domestically, Norway has thus addressed the complications of oil markets by protecting the mainland economy and intervening to distribute oil revenue, combating balance-of-payments shocks and supporting energy security.
The environmental externalities of Norway's activities pose another concern, apart from the domestic economic implications. Most Norwegian gas is exported to European countries. As of 2020, about 20% of natural gas consumed in Europe comes from Norway, and Norwegian oil supplies 2% of global oil consumption. Considering that three million barrels of oil add about 1.3 Mt of CO2 per day to the atmosphere as they are consumed — roughly 474 Mt per year — the global CO2 impact of Norway's natural resource supply is significant.
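As a rough consistency check on the figures above (a sketch, not from the source), the daily and annual totals line up under a commonly used approximation of about 0.43 tonnes of CO2 per barrel of crude; the exact factor varies with crude grade and end use.

# Back-of-the-envelope check of the CO2 figures quoted above (Python).
# The 0.43 t CO2/barrel factor is an assumed approximation, not from this article.
BARRELS_PER_DAY = 3_000_000
T_CO2_PER_BARREL = 0.43  # assumed average combustion emission factor

daily_mt = BARRELS_PER_DAY * T_CO2_PER_BARREL / 1e6  # megatonnes per day
annual_mt = daily_mt * 365

print(f"daily: {daily_mt:.2f} Mt CO2")    # ~1.29 Mt, matching the ~1.3 Mt quoted
print(f"annual: {annual_mt:.0f} Mt CO2")  # ~471 Mt, close to the 474 Mt quoted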
Although Norway exports eight times the amount of energy it consumes domestically, most of Norway's own carbon emissions come from its oil and gas industry (30%) and road traffic (23%). To address the problem of CO2 emissions, the Norwegian government has adopted different measures, including signing multilateral and bilateral treaties to cut its emissions in light of rising global environmental concerns.
According to a report from Norsk Petroleum, petroleum is Norway's most important export commodity. In 2020, 40% of Norway's exports stemmed from the petroleum sector, with an export value of 333 billion NOK. Norway produces 2% of the world's oil consumption, making it the 15th-largest oil producer in the world in 2019. Fossil fuels provide a major economic boost in Norway while driving down domestic energy costs. Fossil fuel operations in Norway are also a large source of Norwegian employment. It has been argued that Norway can serve as a role model for many countries in terms of petroleum resource management. In Norway, good institutions and an open, dynamic public debate involving a whole variety of civil society actors are key factors for successful petroleum governance.
The International Energy Agency notes in a 2018 report that the fossil fuel industry in Norway may face various challenges in the future. New energy sources and production methods, such as shale extraction and hydraulic fracturing (commonly known as fracking), may substitute for conventional oil and gas. Renewable energy and the deployment of new technologies also pose a large risk to fossil fuel production. A new generation of people in the workforce may also lead oil producers to face backlash. Oil is facing a decline in price on the global market, which plays a large role in global and European decarbonisation. The diminishing consumption of oil is impending, but the speed and scale of the transition to renewable energy sources is debated. In this regard, peak oil demand is a large topic of discussion for oil producers; current and future oil prices have a much larger effect on peak demand than sales volumes alone. Scholars also question when oil and gas will reach peak demand, but an increasing number of scholars are more concerned with what happens after the peak - whether there will be a plateau, a gentle decline, or a sudden collapse.
Increasing competition among oil suppliers also poses a challenge within the fossil fuel debate. The evident transition to renewable energy may cause suppliers to move quickly to secure the remaining supply of oil so that their fossil fuel assets do not end up unprofitable and undeveloped. The European Union's history of taxing oil products and carbon-intensive goods also supports the transition away from fossil fuels.

Natural gas
In the aftermath of the 2022 Nord Stream pipeline sabotage, Norway became the leading natural gas supplier to the European Union. According to Lukas Trakimavičius, an energy security expert at the Center for European Policy Analysis, there is a risk that hostile actors could try to undermine the European Union's natural gas security by targeting Norway's offshore gas infrastructure. Given the size and remoteness of Norway's subsea pipelines, attribution of such an attack could be very difficult.

North Sea oil
In May 1963, Norway asserted sovereign rights over natural resources in its sector of the North Sea. Exploration started on July 19, 1966, when Ocean Traveller drilled its first hole.
Initial exploration was fruitless until Ocean Viking found oil on August 21, 1969. By the end of 1969, it was clear that there were large oil and gas reserves in the North Sea. The first oil field was Ekofisk, which produced 427,442 barrels of crude in 1980. Large natural gas reserves have subsequently been discovered as well, and it was specifically this huge amount of oil found in the North Sea that made Norway's separate path outside the EU feasible.
Against the backdrop of the 1972 Norwegian referendum rejecting membership of the European Communities, the Norwegian Ministry of Industry, headed by Ola Skjåk Bræk, moved quickly to establish a national energy policy. Norway decided to stay out of OPEC, to keep its own energy prices in line with world markets, and to place the revenue—known as the "currency gift"—in the Petroleum Fund of Norway. The Norwegian government established its own oil company, Statoil (since renamed Equinor), that year, and awarded drilling and production rights to Norsk Hydro and Saga Petroleum.
The North Sea turned out to present many technical challenges for production and exploration, and Norwegian companies invested in building the capabilities to meet them. A number of engineering and construction companies emerged from the remnants of the largely lost shipbuilding industry, creating centers of competence in Stavanger and the western suburbs of Oslo. Stavanger also became the land-based staging area for the offshore drilling industry. Because refineries require particular crude qualities to make certain commercial oils, Norway imported NOK 3.5 billion of foreign oil in 2015.

Barents Sea oil
In March 2005, Minister of Foreign Affairs Jan Petersen stated that the Barents Sea, off the coasts of Norway and Russia, may hold one third of the world's remaining undiscovered oil and gas. Also in 2005, the moratorium on exploration in the Norwegian sector, imposed in 2001 due to environmental concerns, was lifted following a change in government. A terminal and liquefied natural gas plant are now being constructed at Snøhvit; it is thought that Snøhvit may also act as a future staging post for oil exploration in the Arctic Ocean.

Renewable energy
Wind power
In 2021, 64 wind farms had a total installed wind power capacity of 4,649 MW, with 706 MW of onshore capacity added during that year. Wind electricity production in 2021 was 11.8 TWh, or 8.5% of Norway's needs.

Solar power
In 2022, solar power had a capacity of 321 MW and produced around 0.3 TWh of electricity per annum.

Hydroelectric power
Norway is Europe's largest producer of hydropower.

Tidal power
Norway was the first country to generate electricity commercially using sea-bed tidal power. A 300-kilowatt prototype underwater turbine started generating in the Kvalsund, south of Hammerfest, on November 13, 2003.

Electricity generation
Electricity generation in Norway is almost entirely from hydroelectric power plants. Of the total production in 2005 of 137.8 TWh, 136 TWh was from hydroelectric plants, 0.86 TWh was from thermal power, and 0.5 TWh was wind-generated; the sketch below makes the implied shares explicit. In 2005 the total consumption was 125.8 TWh.
Norway's and Sweden's grids have long been connected. Beginning in 1977, the Norwegian and Danish grids were connected with the Skagerrak power transmission system, with a transmission capacity of 500 MW, growing to 1,700 MW in 2015. Since 6 May 2008, the Norwegian and Dutch electricity grids have been interconnected by the NorNed submarine HVDC (450 kilovolt) cable, with a capacity of 700 megawatts.
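A minimal sketch (Python) of the share calculation implied by the 2005 production figures quoted above; the numbers are the article's, the rounding is mine.

# Shares of Norway's 2005 electricity production, from the figures quoted above.
production_twh = {"hydro": 136.0, "thermal": 0.86, "wind": 0.5}
total_twh = 137.8  # total production as quoted (listed components sum to ~137.4)

for source, twh in production_twh.items():
    print(f"{source}: {twh / total_twh:.1%}")
# hydro comes out at roughly 98.7% of total production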
Policies to curb carbon emissions
Despite producing the majority of its electricity from hydroelectric plants, Norway is ranked 30th in the 2008 list of countries by carbon dioxide emissions per capita and 37th in the 2004 list of countries by ratio of GDP to carbon dioxide emissions. Norway is a signatory to the Kyoto Protocol, under which it agreed to hold its carbon emissions to no more than 1% above 1990 levels by 2012.
On April 19, 2007, Prime Minister Jens Stoltenberg announced to the Labour Party annual congress that Norway's greenhouse gas emissions would be cut by 10 percent more than its Kyoto commitment by 2012, and that the government had agreed to achieve emission cuts of 30% by 2020. He also proposed that Norway should become carbon neutral by 2050, and called upon other rich countries to do likewise. This carbon neutrality would be achieved partly by carbon offsetting, a proposal criticised by Greenpeace, who also called on Norway to take responsibility for the 500 million tonnes of emissions caused by its exports of oil and gas. World Wildlife Fund Norway also believes that the purchase of carbon offsets is unacceptable, saying "it is a political stillbirth to believe that China will quietly accept that Norway will buy climate quotas abroad". The Norwegian environmental organisation the Bellona Foundation believes that the prime minister was forced to act due to pressure from anti-European Union members of the coalition government, and called the announcement "visions without content".
Globally, Norway has set a clear agenda on climate leadership and on mitigating the negative consequences of climate change. In terms of climate goals, Norway, along with the Netherlands, has one of the strictest timelines for eliminating fossil fuels and reducing carbon emissions. However, the Federation of Norwegian Industries notes in a 2021 report that Norway is far from realising its climate action and emission reduction goals for both 2030 and 2050.

Carbon capture and storage
Norway was the first country to operate an industrial-scale carbon capture and storage project, at the Sleipner oilfield, dating from 1996 and operated by Equinor. Carbon dioxide is stripped from natural gas with amine solvents and is deposited in a saline formation. The carbon dioxide is a waste product of the field's natural gas production; the gas contains 9% CO2, more than is allowed in the natural gas distribution network. Storing it underground avoids this problem and saves Equinor hundreds of millions of euros in carbon taxes. Sleipner stores about one million tonnes of CO2 a year.
Large oil companies have invested in carbon capture and storage technology in Norway. The Northern Lights Project, the world's first network project within carbon capture and storage, was signed by Equinor, Shell, and Total, totalling USD 675 million.

Carbon tax
Norway introduced a carbon tax on fuels in 1991. The tax started at a rate of US$51 per tonne of CO2 on gasoline, with an average tax of US$21 per tonne; it applied to diesel, mineral oil, and oil and gas used in North Sea extraction activities. The International Energy Agency (IEA) stated in 2001 that "since 1991 a carbon dioxide tax has applied in addition to excise taxes on fuel." The rate is among the highest in the OECD, and the tax applies to offshore oil and gas production. IEA estimates for revenue generated by the tax in 2004 were 7,808 million NOK (about US$1.3 billion in 2010 dollars).
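As a simple illustration of how such a per-tonne levy scales (hypothetical quantities, not official Norwegian figures), the tax owed is just the covered emissions multiplied by the rate:

# Illustrative per-tonne carbon tax calculation (hypothetical inputs).
def carbon_tax_usd(tonnes_co2: float, rate_usd_per_tonne: float) -> float:
    """Tax owed for a given quantity of CO2 emissions at a flat rate."""
    return tonnes_co2 * rate_usd_per_tonne

# One million tonnes of CO2 at the US$21/tonne average rate quoted above:
print(f"US${carbon_tax_usd(1_000_000, 21.0):,.0f}")  # US$21,000,000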
According to the IEA's 2005 review, Norway's CO2 tax is its most important climate policy instrument, covering about 64% of Norwegian CO2 emissions and 52% of total greenhouse gas emissions. Some industry sectors were exempted to preserve their competitive position. Various studies in the 1990s, and an economic analysis by Statistics Norway, estimated the effect to be a reduction of 2.5–11% of Norwegian emissions compared to (untaxed) business-as-usual. However, Norway's per capita emissions had still risen by 15% as of 2008.
In an attempt to reduce CO2 emissions further, Norway implemented an emissions trading scheme in 2005 and joined the European Union Emissions Trading Scheme (EU ETS) in 2008. As of 2013, roughly 55% of CO2 emissions in Norway were taxed; emissions exempt from the tax are included in the EU ETS. Certain CO2 taxes are applied to emissions resulting from petroleum activities on the continental shelf. This tax is charged per liter of oil and natural gas liquids produced, as well as per standard cubic meter of gas flared or otherwise emitted. However, this carbon tax is a tax-deductible operating cost for petroleum production. In 2013, carbon tax rates were doubled to 0.96 NOK per liter/standard cubic meter of mineral oil and natural gas. As of 2016, the rate had increased to 1.02 NOK. The Norwegian Ministry of the Environment has described CO2 taxes as the most important tool for reducing emissions.

See also
Carbon footprint
Climate change in Norway
Energy policy
European Economic Area
Future energy development
Natural resources of the Arctic
Oil megaprojects (2011)
Peak oil
Renewable energy in Norway

References

Further reading
International Energy Agency (2005). Energy Policies of IEA Countries – Norway – 2005 Review. Paris: OECD/IEA. ISBN 92-64-10935-8. Archived from the original on 2010-06-15. Retrieved 2010-10-11.

External links
Interactive Map over the Norwegian Continental Shelf, live information, facts, pictures and videos.
Energy efficiency policies and measures in Norway 2006
Oil and gas in the Barents Sea – A perspective from Norway
CICERO: A green certificate market may result in less green electricity
Lofty Pledge to Cut Emissions Comes With Caveat in Norway
CO2STORE research project
Map of Norway's offshore oil and gas infrastructure
dow chemical company
The Dow Chemical Company is an American multinational corporation headquartered in Midland, Michigan, United States. The company is among the three largest chemical producers in the world. It is the operating subsidiary of Dow Inc., a publicly traded holding company incorporated under Delaware law.
With a presence in around 160 countries, it employs about 37,800 people worldwide. Dow has been called the "chemical companies' chemical company", as its sales are to other industries rather than directly to end-use consumers. Dow is a member of the American Chemistry Council.
In 2015, Dow and fellow chemical company DuPont agreed to a corporate reorganization involving the merger of Dow and DuPont followed by a separation into three different entities. The plan commenced in 2017, when Dow and DuPont merged to form DowDuPont, and was finalized in April 2019, when the materials science division was spun off from DowDuPont and took the name of the Dow Chemical Company.

History
Early history
Dow was founded in 1897 by chemist Herbert Henry Dow, who invented a new method of extracting the bromine that was trapped underground in brine at Midland, Michigan. The company originally sold only bleach and potassium bromide, achieving a bleach output of 72 tons a day in 1902. Early in the company's history, a group of British manufacturers tried to drive Dow out of the bleach business by cutting prices. Dow survived by also cutting its prices and, although losing about $90,000 in income, began to diversify its product line.
In 1905, German bromide producers began dumping bromides at low cost in the U.S. in an effort to prevent Dow from expanding its sales of bromides in Europe. Instead of competing directly for market share with the German producers, Dow bought the cheap German-made bromides and shipped them back to Europe, undercutting its German competitors. Even in its early history, Dow set a tradition of rapidly diversifying its product line. Within twenty years, Dow had become a major producer of agricultural chemicals, elemental chlorine, phenol and other dyestuffs, and magnesium metal.
During World War I, Dow supplied many war materials that the United States had previously imported from Germany. Dow produced magnesium for incendiary flares, monochlorobenzene and phenol for explosives, and bromine for medicines and tear gas. By 1918, 90 percent of Dow's production was geared towards the war effort. At this time, Dow created the diamond logo that is still used by the company. After the war, Dow continued research in magnesium and developed refined automobile pistons that delivered more speed and better fuel efficiency. The Dowmetal pistons were used heavily in racing vehicles, and the winner of the 1921 Indianapolis 500 used Dowmetal pistons in his vehicle.
In the 1930s, Dow began producing plastic resins, which would grow to become one of the corporation's major businesses. Its first plastic products were ethylcellulose, made in 1935, and polystyrene, made in 1937.

Diversification and expansion
From 1940 to 1941, Dow built its first plant in Freeport, Texas, to produce magnesium extracted from seawater rather than underground brine. The Freeport plant is Dow's largest site, and the largest integrated chemical manufacturing site in the country. The site grew quickly, with power, chlorine, caustic soda and ethylene also soon in production. Growth of this business made Dow a strategic company during World War II, as magnesium became important for manufacturing lightweight aircraft parts.
Based on 2002–2003 data, the Freeport plants produced 27 billion lbs of product – or 21% of Dow's global production. In 1942, Dow began its foreign expansion with the formation of Dow Chemical of Canada in Sarnia, Ontario, to produce styrene for use in styrene-butadiene synthetic rubber. Also during the war, Dow and Corning began their joint venture, Dow Corning, to produce silicones for military and, later, civilian use. The Ethyl-Dow Chemical Co. plant at Kure Beach, NC, the only plant on the East Coast producing bromine from seawater, was attacked by a German U-boat in 1942.
In the post-war era, Dow began expanding outside of North America, founding its first overseas subsidiary in Japan in 1952, and in several other nations soon thereafter. Based largely on its growing plastics business, Dow opened a consumer products division, beginning with Saran wrap in 1953. Based on its growing chemicals and plastics businesses, Dow's sales exceeded $1 billion in 1964 and $2 billion in 1971.

Nuclear weapons
From 1951 to 1975, Dow managed the Rocky Flats Plant near Denver, Colorado. Rocky Flats was a nuclear weapons production facility that produced plutonium triggers for hydrogen bombs. Contamination from fires and radioactive waste leakage plagued the facility under Dow's management. In 1957, a fire burned plutonium dust in the facility and sent radioactive particles into the atmosphere.
The Department of Energy transferred management of the facility to Rockwell International in 1975. In 1990, nearby residents filed a class action lawsuit against Dow and Rockwell for environmental contamination of the area; the case was settled in 2017 for $375 million. According to the appellate court, the owners of the 12,000 properties in the class-action area had not proved that their properties were damaged or that they had suffered bodily injury.

Vietnam War: napalm and Agent Orange
The United States military used napalm bombs during the Vietnam War until 1973. Dow was one of several manufacturers that began producing the napalm B compound under government contract from 1965. After experiencing protests and negative publicity, the other suppliers discontinued manufacturing the product, leaving Dow as the sole provider. The company said that it carefully considered its position and decided, as a matter of principle, that "its first obligation was to the government". Despite a boycott of its products by anti-war groups and harassment of recruiters on some college campuses, Dow continued to manufacture napalm B until 1969.
Agent Orange, a chemical defoliant containing dioxin, was also manufactured by Dow in New Plymouth, New Zealand, and Midland, Michigan, in the United States, for use by the British military during the Malayan Emergency and the U.S. military during the Vietnam War. In 2005, a lawsuit was filed by Vietnamese victims of Agent Orange against Dow and Monsanto Co., which also supplied Agent Orange to the military. The lawsuit was dismissed. In 2012, Monsanto agreed to a $93 million settlement as a result of a case pursued by ex-Monsanto employees and citizens of the city of Nitro, WV. In 1949, a chemical plant in Nitro had experienced an explosion that damaged a tank containing 2,4,5-T, one of the components used in the production of Agent Orange. The settlement included $9 million for the cleanup of affected homes in the area and $84 million to cover the medical monitoring and treatment of people affected by the explosion, as well as legal costs for the claimants.
Critics have also claimed that related contamination in Michigan, near Dow's Midland headquarters, has gone unaddressed and that the company has withheld evidence from the community.

Dow Corning breast implants
A major manufacturer of silicone breast implants, Dow Corning (Dow Chemical's joint venture with Corning Inc.) was sued for personal damages caused by ruptured implants. On 6 October 2005, all such cases pending in the District Court against the company were dismissed. A number of large, independent reviews of the scientific literature, including one by the Institute of Medicine in the United States, have subsequently found that silicone breast implants do not cause breast cancers or any identifiable systemic disease.

Bhopal disaster
The Bhopal disaster occurred at a pesticide plant owned by Union Carbide India Ltd., a subsidiary of Union Carbide, in 1984. A gas cloud containing methyl isocyanate and other chemicals spread to the neighborhoods near the plant, where more than half a million people lived. The government of Madhya Pradesh confirmed 3,787 deaths related to the gas release. The leak caused 558,125 injuries, including 38,478 temporary partial injuries and approximately 3,900 severe and permanently disabling injuries. Union Carbide was sued by the Government of India and agreed to an out-of-court settlement of US$470 million in 1989. Dow Chemical acquired Union Carbide in 2001. Activists want Dow Chemical to clean up the site, which is now controlled by the state of Madhya Pradesh.

DBCP
Until the late 1970s, Dow produced DBCP (1,2-dibromo-3-chloropropane), a soil fumigant and nematicide sold under the names Nemagon and Fumazone. Plantation workers who alleged that they became sterile or were stricken with other maladies subsequently sued both Dow and Dole Foods in Latin American courts. The cases were marred by extensive fraud, including the falsification of test results and the recruitment of plaintiffs who had never worked on Dole plantations. While Nicaraguan courts awarded the plaintiffs over $600 million in damages, they have been unable to collect any payment from the companies. A group of plaintiffs then sued in the United States, and, on 5 November 2007, a Los Angeles jury awarded them $3.2 million. Dole and Dow vowed to appeal the decision. On 23 April 2009, a Los Angeles judge threw out two cases against Dole and Dow due to fraud and extortion by lawyers in Nicaragua who had recruited fraudulent plaintiffs to make claims against the companies. The ruling cast doubt on $2 billion in judgments in similar lawsuits.

Tax evasion
In February 2013, a federal court rejected two tax shelter transactions entered into by Dow that created approximately $1 billion in tax deductions between 1993 and 2003. The court wrote that the transactions were "schemes that were designed to exploit perceived weaknesses in the tax code and not designed for legitimate business reasons". The schemes were created by Goldman Sachs and the law firm of King & Spalding, and involved creating a partnership that Dow operated out of its European headquarters in Switzerland. Dow stated that it had paid all tax assessments with interest. The case was against the Internal Revenue Service, seeking a refund of the taxes paid. The case was appealed to the 5th Circuit court, where Dow's claims were again rejected. Dow has petitioned for an en banc hearing by the 5th Circuit, arguing that the decision was contrary to established case law.
Price fixing
Dow Chemical was implicated in a price-fixing scheme that inflated the cost of polyurethane for customers. The U.S. Justice Department closed an investigation in 2007, but a class-action lawsuit was won at a jury trial in 2013. Dow settled the suit in 2016 for $835 million.

Recent mergers, acquisitions and reorganization
1990s – transition from geographic alignment to global business units
In the early 1990s, Dow embarked on a major structural reorganization. The former reporting hierarchy was geographically based, with each regional president reporting directly to the overall company president and CEO. The new organization combined the same businesses from different sites, irrespective of the region to which they belonged (e.g., the vice president for polystyrene became responsible for polystyrene plants all over the world).

Union Carbide merger
At the beginning of August 1999, Dow agreed to purchase Union Carbide Corp. (UCC) for $9.3 billion in stock. At the time, the combined company was the second-largest chemical company, behind DuPont. This led to protests from some stockholders, who feared that Dow had not disclosed potential liabilities related to the Bhopal disaster.
William S. Stavropoulos served as president and chief executive officer of Dow from 1995 to 2000, then again from 2002 to 2004. He relinquished his board seat on 1 April 2006, having been a director since 1990 and chairman since 2000. During his first tenure, he led the purchase of UCC, which proved controversial, as it was blamed for poor results under his successor as chief executive officer, Mike Parker. Parker was dismissed, and Stavropoulos returned from retirement to lead Dow.

2006–2008 restructuring
On 31 August 2006, Dow announced that it planned to close facilities at five locations:
Sarnia, Ontario was Dow's first manufacturing site in Canada, located in the Chemical Valley area alongside other petrochemical companies. In 1942, the Canadian government invited Dow to build a plant there to produce styrene (an essential raw material used to make synthetic rubber for World War II). Dow then built a polystyrene plant in 1947. In August 1985, the site accidentally discharged 11,000 litres of perchloroethylene (a carcinogenic dry cleaning chemical) into the St. Clair River; the discharge gained infamy in the media as "The Blob", and Dow Canada was charged by the Ministry of the Environment. Up to the early 1990s, Dow Canada's headquarters was located at the Modeland Centre, and a new three-story complex called the River Centre was opened on the Sarnia site in 1993 to house research and development. Since then, several plants (Dow terminology for a production unit) on the site have been dismantled, particularly the Basic Chemicals including Chlor Alkali unit, whose closure was announced in 1991 and carried out in 1994, affecting nearly half of the site's employees. The Dow Canada headquarters were moved to Calgary, Alberta in 1996, and the Modeland Centre was sold to Lambton County and the City of Sarnia, with Dow leasing some office space. The Dow Fitness Centre was donated to the YMCA of Sarnia-Lambton in 2003. The Sarnia site's workforce declined from a peak of 1,600 personnel in the early 1990s to about 400 by 2002. In the late 1990s, land on the site was sold to TransAlta, which built a natural gas power plant that began operations in 2002 to supply electricity to the remaining Sarnia site plants and facilities, so that Dow could close its older, less efficient steam plant (originally coal-fired and later burning natural gas).
On 31 August 2006, Dow announced that the entire Sarnia site would cease operations at the end of 2008. The Sarnia site had been supplied with ethylene through a pipeline from western Canada, but BP officials warned Dow that shipments from the pipeline had to be suspended for safety reasons, and the loss of an affordable supply for the low-density polyethylene plant rendered all the other operations at the site non-competitive. The Low-Density Polyethylene and Polystyrene units closed in 2006, followed by the Latex unit in 2008, and finally the Propylene Oxide Derivatives unit in April 2009. Dow afterward focused its efforts on the environmental remediation of the vacant site, which was sold to TransAlta. The former site has since been renamed the Bluewater Energy Park, with the River Centre remaining available for lease.
The other closures were: one plant at the Barry site (South Wales), a triple-string STR styrene polymer production unit that was integral to the company's development of super-high-melt foam-specific polymers and Styron A-Tech high-gloss, high-impact polymers; one plant at the Porto Marghera (Venice), Italy site; and two plants at the Fort Saskatchewan, Alberta, Canada site.
On 2 November 2006, Dow and Izolan, the leading Russian producer of polyurethane systems, formed the joint venture Dow-Izolan and built a manufacturing facility in the city of Vladimir. Also in 2006, Dow formed the Business Process Service Center (BPSC).
In December 2007, Dow announced a series of moves to revamp the company. A 4 December announcement revealed that Dow planned to exit the automotive sealers business in 2008 or 2009. Within several weeks, Dow also announced the formation of a joint venture, later named K-Dow, with Petrochemical Industries Co. (PIC), a subsidiary of Kuwait Petroleum Corporation. In exchange for $9.5 billion, the agreement included Dow selling 50 percent of its interest in five global businesses: polyethylene, polypropylene and polycarbonate plastics, and ethylenamines and ethanolamines. The agreement was terminated by PIC on 28 December 2008.

Rohm & Haas Co. purchase
On 10 July 2008, Dow agreed to purchase all of the common equity interest of Rohm and Haas Co. for $15.4 billion, which equated to $78 per share. The buyout was financed with equity investments of $3 billion by Berkshire Hathaway Inc. and $1 billion by the Kuwait Investment Authority. The purpose of the deal was to move Dow Chemical further into specialty chemicals, which offer higher profit margins than the commodities market and are more difficult for competitors to enter. The purchase was criticized by many on Wall Street who believed Dow Chemical had overpaid (about a 75 percent premium on the previous day's market capitalization); however, the high bid was needed to ward off competing bids from BASF.

Accelerated implementation
On 8 December 2008, Dow announced that due to the financial crisis of 2007–2008, it would accelerate the job cuts resulting from its reorganization. The announced plan included closing 20 facilities, temporarily idling 180 plants, and eliminating 5,000 full-time jobs (about 11 percent of its workforce) and 6,000 contractor positions.

Strategy interruption
Citing the global recession that began in the latter half of 2008, the Kuwaiti government scuttled the K-Dow partnership on 28 December 2008.
The collapse of the deal dealt a blow to Dow CEO Andrew Liveris's vision of restructuring the company to make it less cyclical. However, on 6 January 2009, Dow Chemical announced it was in talks with other parties who could be interested in a major joint venture with the company. Dow also announced that it would seek to recover damages from PIC related to the failed joint venture.
After the K-Dow deal collapsed, some speculated that the company would not complete the Rohm & Haas transaction, as the cash from the former transaction had been expected to fund the latter. The deal was expected to be finalized in early 2009 and was to form one of the largest specialty chemicals firms in the U.S. However, on 26 January 2009, the company informed Rohm and Haas that it would be unable to complete the transaction by the agreed-upon deadline. Dow cited a deteriorated credit market and the collapse of the K-Dow Petrochemical deal as reasons for failing to close the merger on time. Around the same time, CEO Andrew Liveris said a first-time cut to the company's 97-year-old dividend policy was not "off the table". On 12 February 2009, the company declared a quarterly dividend of $0.15/share, down from $0.42 the previous quarter. The cut represented the first time the company had diminished its investor payout in the dividend's 97-year history.
After negotiating the sale of preferred stock with Rohm and Haas's two largest stockholders and extending their one-year bridge loan an additional year, the company agreed on 9 March 2009 to purchase Rohm and Haas for $15 billion ($78 a share); the transaction to purchase the outstanding interest of Rohm and Haas closed on 1 April 2009.

2007 dismissal of senior executives
On 12 April 2007, Dow dismissed two senior executives for "unauthorized discussions with third parties about the potential sale of the company". The two were executive vice president Romeo Kreinberg and director and former CFO J. Pedro Reinhard. Dow claimed they had secretly been in contact with JPMorgan Chase; at the same time, a story surfaced in Britain's Sunday Express regarding a possible leveraged buyout of Dow. The two executives subsequently filed lawsuits claiming they were fired for being a threat to CEO Liveris and that the allegations were concocted as a pretext. However, in June 2008, Dow Chemical and the litigants announced a settlement in which Kreinberg and Reinhard dropped their lawsuits, admitted taking part in discussions "which were not authorized by, nor disclosed to, Dow's board concerning a potential LBO", and acknowledged that it would have been appropriate to have informed the CEO and board of the talks.

2008 sale of zoxamide business
In summer 2008, Dow sold its zoxamide business to Gowan Company. Included in the sale were the trademarks for a potato and grape fungicide called Gavel. It is employed by potato growers to control early and late potato blight and to suppress tuber blight, and is also registered in Canada for control of downy mildew in grapes, except in British Columbia.

2014 – New operating segments
In the fourth quarter of 2014, Dow announced new operating segments in response to its previously announced leadership changes. The company stated that the change would give further support to its end-market orientation and increase its alignment with Dow's key value chains – ethylene and propylene.

U.S. Gulf Coast investments
Several plants on the Gulf Coast of the US have been in development since 2013, as part of Dow's transition away from naphtha.
Dow estimates the facilities will employ about 3,000 people, and 5,000 people during construction. The plants will manufacture materials for several of its growing segments, including hygiene and medical, transportation, electrical and telecommunications, packaging, consumer durables, and sports and leisure.
Dow's new propane dehydrogenation (PDH) facility in Freeport, Texas, was expected to come online in 2015, with a first 750,000-tonne-per-year unit, while other units would become available in the future. An ethylene production facility was expected to start up in the first half of 2017.

Chlorine merger
On 27 March 2015, Dow and Olin Corporation announced that the boards of directors of both companies had unanimously approved a definitive agreement under which Dow would separate a significant portion of its chlorine business and merge that new entity with Olin, in a transaction creating an industry leader with revenues approaching $7 billion. Olin, as the new combined company, became the largest chlorine producer in the world.

2015 merger and 2019 separation with DuPont
On 11 December 2015, Dow announced that it would merge with DuPont in an all-stock deal. The combined company, known as DowDuPont, had an estimated value of $130 billion, was equally held by the shareholders of both companies, and maintained their respective headquarters in Michigan and Delaware. Within two years of the merger's closure, DowDuPont was set to split into three separate public companies, focusing on the agriculture, chemical, and specialty product industries. In the new entity, Dow Chemical chief executive officer Andrew N. Liveris became executive chairman and DuPont chief executive officer Edward D. Breen became chief executive officer. In January 2017, the merger was pushed back a second time, pending regulatory approvals.
On the day the merger was announced, Dow also said it had reached a deal to acquire Corning Incorporated's stake in their joint venture Dow Corning for $4.8 billion in cash and a roughly 40% stake in Hemlock Semiconductor Corporation.
In 2019, DowDuPont de-merged, forming Dow Inc. The spin-off was completed on 1 April 2019, at which time Dow Inc. became an independent, publicly traded company and the direct parent company of The Dow Chemical Company. Also in 2019, Dow employees won an Adhesives and Sealants Council Innovation Award for "UV Curable Primer that Enables Hard to Bond INFUSE Olefin Block Copolymer Midsole Foams in High Performance Footwear".

Focus on higher margin business
Dow Chemical has begun to shed commodity chemical businesses, such as those making the basic ingredients for grocery bags and plastic pipes, because their profit margins average only 5–10%. As of 2015, Dow is focusing its resources on specialty chemicals that earn profit margins of at least 20%.

Dioxin contamination
Areas along Michigan's Tittabawassee River, which runs within yards of Dow's main plant in Midland, were found to contain elevated levels of the cancer-causing chemical dioxin in November 2006. The dioxin was located in sediments two to ten feet below the surface of the river, and, according to The New York Times, "there is no indication that residents or workers in the area are directly exposed to the sites". However, people who often ate fish from the river had slightly elevated levels of dioxin in their blood.
In July 2007, Dow reached an agreement with the Environmental Protection Agency to remove 50,000 cubic yards (38,000 m3) of sediment from three areas of the riverbed and levees of the river that had been found to be contaminated. In November 2008, Dow Chemical, the United States Environmental Protection Agency and the Michigan Department of Environmental Quality agreed to establish a Superfund to address dioxin cleanup of the Tittabawassee River, Saginaw River and Saginaw Bay.

Sale of herbicide business
In December 2015, Dow Chemical agreed to sell part of its global herbicide business, which had reported falling sales for nearly a year. A portfolio of weed killers known as dinitroanilines was sold to privately held Gowan Company, a family-owned company located in Yuma, Arizona, which markets a variety of pesticides to the agricultural and horticultural industries. The global trademarks for Treflan®, which can be sprayed on field corn, cotton and some fruit and vegetables, were included in the sale, as well as a formulation and packaging facility in Fort Saskatchewan, Alberta, Canada. Edge®, Team®, Bonalan® and Sonalan® — intellectual property and labels for herbicides based on the molecules trifluralin, benfluralin and ethalfluralin — were also included in the sale. Annual grasses and small-seeded broadleaf weeds can be controlled with these products in a wide range of crops, including cotton, beans, canola, cereals, crucifers, cucurbits, and vegetables. Dinitroanilines are also known as "DNA herbicides" and have been commercialised since at least 1970.

2020 evacuation
In May 2020, Dow Chemical's Midland site, along with many other areas in Midland County, Michigan, was forced to evacuate due to severe flooding caused by the breach of the Edenville and Sanford dams following two days of heavy rainfall in the area.

Products
Dow is a large producer of plastics, including polystyrene, polyurethane, polyethylene, polypropylene, and synthetic rubber. It is also a major producer of ethylene oxide, various acrylates, surfactants, and cellulose resins. It produces agricultural chemicals including the pesticide Lorsban, and consumer products including Styrofoam. Some Dow consumer products, including Saran wrap, Ziploc bags, and Scrubbing Bubbles, were sold to S. C. Johnson & Son in 1997.

Performance plastics
Performance plastics make up 25% of Dow's sales, with many products designed for the automotive and construction industries. The plastics include polyolefins such as polyethylene and polypropylene, as well as polystyrene, used to produce Styrofoam insulating material. Dow manufactures epoxy resin intermediates including bisphenol A and epichlorohydrin. Saran resins and films are based on polyvinylidene chloride (PVDC).

Performance chemicals
The Performance Chemicals segment (17 percent of sales) produces chemicals and materials for water purification, pharmaceuticals, paper coatings, paints and advanced electronics. Major product lines include nitroparaffins, such as nitromethane, used in the pharmaceutical industry and manufactured by Angus Chemical Company, a wholly owned subsidiary of The Dow Chemical Co. Important polymers include Dowex ion-exchange resins, acrylic and polystyrene latex, as well as Carbowax polyethylene glycols. Specialty chemicals are used as starting materials for the production of agrochemicals and pharmaceuticals.
Water purification
Dow Water and Process Solutions (DW&PS) is a business unit which manufactures Filmtec reverse osmosis membranes, used to purify water for human use in the Middle East. The technology was used during the 2000 Summer Olympics and 2008 Summer Olympics. The DW&PS business unit remained with DowDuPont following the April 2019 spin-off.

Agricultural sciences
Agricultural Sciences (Dow AgroSciences) provides 7 percent of sales and is responsible for a range of insecticides (such as Lorsban), herbicides and fungicides. Seeds from genetically modified plants are also an important area of growth for the company. Dow AgroSciences sells seeds commercially under the following brands: Mycogen (grain corn, silage corn, sunflowers, alfalfa, and sorghum), Atlas (soybean), PhytoGen (cotton) and Hyland Seeds in Canada (corn, soybean, alfalfa, navy beans and wheat). The Dow AgroSciences business unit was spun off into Corteva Inc. on 3 June 2019.

Basic plastics
Basic plastics (26 percent of sales) end up in everything from diaper liners to beverage bottles and oil tanks. Products are based on the three major polyolefins – polystyrene (such as Styron resins), polyethylene and polypropylene.

Basic chemicals
Basic chemicals (12 percent of sales) are used internally by Dow as raw materials and are also sold worldwide. Markets include dry cleaning, paints and coatings, snow and ice control, and the food industry. Major products include ethylene glycol, caustic soda, chlorine, and vinyl chloride monomer (VCM, for making PVC). Ethylene oxide and propylene oxide, and the derived alcohols ethylene glycol and propylene glycol, are major feedstocks for the manufacture of plastics such as polyurethane and PET.

Hydrocarbons and energy
The Hydrocarbons and Energy operating segment (13 percent of sales) oversees energy management at Dow. Fuels and oil-based raw materials are also procured. Major feedstocks for Dow are provided by this group, including ethylene, propylene, 1,3-butadiene, benzene and styrene.

Hand sanitizer
In March 2020, during the coronavirus outbreak, Dow expanded its European hand sanitizer production, providing the product free to hospitals.

Finances
For the fiscal year 2017, Dow Chemical reported earnings of US$1.5 billion, with annual revenue of US$62.5 billion, an increase of 29.8% over the previous fiscal cycle. Dow Chemical shares traded at over $67 per share, and its market capitalization was valued at over US$121.1 billion in September 2018.

Environmental record
In 2003, Dow agreed to pay $2 million, the largest penalty ever in a pesticide case, to the state of New York for making illegal safety claims related to its pesticides. The New York Attorney General's Office stated that Dow AgroSciences had violated a 1994 agreement with the State of New York to stop advertisements making safety claims about its pesticide products. Dow stated that it was not admitting to any wrongdoing and that it was agreeing to the settlement to avoid a costly court battle.
According to the United States Environmental Protection Agency (EPA), Dow has some responsibility for 96 of the United States' Superfund toxic waste sites, placing it in 10th place by number of sites. One of these, a former UCC uranium and vanadium processing facility near Uravan, Colorado, is listed as the sole responsibility of Dow. The rest are shared with numerous other companies.
Fifteen sites have been listed by the EPA as finalized (cleaned up) and 69 are listed as "construction complete", meaning that all required plans and equipment for cleanup are in place.
In 2007, the chemical industry trade association – the American Chemistry Council – gave Dow an award of 'Exceptional Merit' in recognition of longstanding energy efficiency and conservation efforts. Between 1995 and 2005, Dow reduced energy intensity (BTU per pound produced) by 22 percent. This is equivalent to saving enough electricity to power eight million US homes for a year. The same year, the Dow subsidiary Dow AgroSciences won a United Nations Montreal Protocol Innovators Award for its efforts in helping replace methyl bromide – a compound identified as contributing to the depletion of the ozone layer. In addition, Dow AgroSciences won an EPA "Best of the Best" Stratospheric Ozone Protection Award. The United States Environmental Protection Agency (EPA) named Dow a 2008 Energy Star Partner of the Year for excellence in energy management and reductions in greenhouse gas emissions.

Carbon footprint
Dow Chemical Company reported total CO2e emissions (direct + indirect) for the twelve months ending 31 December 2020 of 33,100 Kt (an increase of 700 Kt, or 2.2%, year on year) and plans to reduce emissions 15% by 2030 from a 2019 base year.

Board of directors
The final board of directors of The Dow Chemical Co., prior to the closing of the merger with DuPont on 1 September 2017, comprised:
Ajay Banga – president and CEO, MasterCard
Jacqueline Barton – chemistry professor, California Institute of Technology
James A. Bell – former president and CFO, Boeing
Richard K. Davis – chairman of the board and chief executive officer of U.S. Bancorp
Jeff Fettig – chairman and CEO, Whirlpool Corp.
Jim Fitterling – chairman and CEO, Dow Inc.
Andrew N. Liveris – former chairman and CEO, The Dow Chemical Co.
Mark Loughridge – former chief financial officer, IBM
Raymond J. Milchovich – lead director of Nucor and former chairman and CEO of Foster Wheeler AG
Robert S. (Steve) Miller – International Automotive Components (IAC) Group
Paul Polman – CEO, Unilever PLC and Unilever
Dennis H. Reilley – former chairman, Covidien Ltd.
James Ringler – vice chairman, Illinois Tool Works Inc.
Ruth G. Shaw – former president and CEO, Duke Energy Corp.
The members of the board of directors of today's iteration of Dow are:
Samuel R. Allen – chairman and former CEO, Deere & Company
Ajay Banga – president and CEO, MasterCard
Jacqueline Barton – chemistry professor, California Institute of Technology
James A. Bell – former president and CFO, Boeing
Wesley G. Bush – chairman, Northrop Grumman
Richard K. Davis – chairman and CEO of U.S. Bancorp; Make-A-Wish chairman
Jeff Fettig – former chairman and CEO, Whirlpool Corp.
Jim Fitterling – Dow Inc. chairman and CEO
Jacqueline Hinman – former chairman, president and CEO of CH2M Hill
Jill S. Wyant – EVP and president of global regions, Ecolab, Inc.
Daniel W. Yohannes – former U.S. Ambassador to the Organisation for Economic Co-operation and Development

Major sponsorships
In July 2010, Dow became a worldwide partner of the Olympic Games; the sponsorship extended until 2020.
In September 2004, Dow obtained the naming rights to the Saginaw County Event Center in Saginaw, Michigan; the center is now called the Dow Event Center.
The Saginaw Spirit (of the Ontario Hockey League) plays at the center, which also hosts events such as professional wrestling, live theater, and concerts. In October 2006, Dow bought the naming rights to the stadium used by the Great Lakes Loons, a Single-A minor league baseball team located in its hometown of Midland, Michigan. The stadium is called Dow Diamond. The Dow Foundation played a key role in bringing the Loons to the city. In 2010, Dow signed a $100m (£63m) 10-year deal with the International Olympic Committee and agreed to sponsor the £7m Olympic Stadium wrap. Since 2014, Dow has also sponsored Austin Dillon's #3 Chevrolet for Richard Childress Racing in the NASCAR Cup Series.
Major collaborations
Lab Safety Academy
On 20 May 2013, Dow launched the Dow Lab Safety Academy, a website that includes a large collection of informational videos and resources that demonstrate best practices in laboratory safety. The goal of the website is to improve awareness of safety practices in academic research laboratories and to help the future chemical workforce develop a safety mindset. As such, the Dow Lab Safety Academy is primarily geared toward university students. However, Dow has made the content open to all, including those already employed in the chemical industry. The Dow Lab Safety Academy is also available through the Safety and Chemical Engineering Education program, an affiliate of the American Institute of Chemical Engineers (AIChE), and The Campbell Institute, an organization focusing on environment, health and safety practices. The Dow Lab Safety Academy is one component of Dow's larger laboratory safety initiative launched in early 2012, following a report from the U.S. Chemical Safety Board that highlighted the potential hazards associated with conducting research at chemical laboratories in academic institutions. Seeking to share industry best practices with academia, Dow partnered with several U.S. research universities to improve safety awareness and practices in the departments of chemistry, chemical engineering, engineering and materials. Through the pilot programs with U.C. Santa Barbara (UCSB), the University of Minnesota, and Pennsylvania State University, Dow worked with graduate students and faculty to identify areas of improvement and develop a culture of laboratory safety.
Nature conservancy
In January 2011, The Nature Conservancy and The Dow Chemical Co. announced a collaboration to integrate the value of nature into business decision-making. Scientists, engineers, and economists from The Nature Conservancy and Dow are working together at three pilot sites (North America, Latin America, and TBD) to implement and refine models that support corporate decision-making related to the value and resources nature provides. Those ecosystem services include water, land, air, oceans and a variety of plant and animal life. These sites will serve as "living laboratories" to validate and test methods and models so they can be used to inform more sustainable business decisions at Dow and, it is hoped, influence the decision-making and business practices of other companies.
Part-owned companies
Companies part-owned by Dow include:
EQUATE Petrochemical Co. K.S.C.C.
The Kuwait Olefins Company K.S.C.C.
The Kuwait Styrene Company K.S.C.C.
Map Ta Phut Olefins Company Limited
SCG-DOW Group
Sadara Chemical Company
Notable employees
George Becker, former vice president of the AFL–CIO and president of the United Steelworkers; worked at Dow's aluminum rolling mill in Madison, Illinois, where he was a shop steward.
Buddy Burris, professional football player with the Green Bay Packers; worked for Dow following his football career.
Norman F. Carnahan, chemical engineer; worked at Dow's Plaquemines Parish, Louisiana division from 1965 to 1968.
Sven Trygve Falck, Norwegian engineer, businessperson and politician; Dow engineer in Texas from 1967 to 1970.
Larry Garner, Louisiana blues musician; worked at Dow's Baton Rouge, Louisiana facility.
Bettye Washington Greene, first African-American female chemist employed at Dow; began working in 1965 at the E.C. Britton Lab.
Alexandre Hohagen, vice president for Latin America and US Hispanics at Facebook; former public relations manager for Dow Chemical Brazil.
Zdravko Ježić, Olympic silver medalist; worked for Dow in Texas on the development of urethane and oxide polymers.
Claude-André Lachance, youngest person elected to the House of Commons of Canada (prior to 2011); director of public affairs for Dow Canada.
Ray McIntire, inventor of Styrofoam; began working for Dow in 1940 and became a research director.
Fred McLafferty, chemist who pioneered the technique of gas chromatography-mass spectrometry; began working at Dow's organic chemistry research laboratory in Midland, Michigan in the 1950s.
John Moolenaar, member of the Michigan Senate and Michigan House of Representatives; worked as a chemist for Dow.
George Andrew Olah, recipient of the 1994 Nobel Prize in Chemistry; employed at Dow's Sarnia, Canada, plant in the late 1950s.
Joseph Overton, political scientist who developed the Overton window concept; worked for Dow as an electrical engineer, quality specialist, and project manager.
Forrest Parry, inventor of the magnetic stripe card; worked for Dow in the 1950s.
Roy A. Periana, American organometallic chemist; worked for Dow at Midland, Michigan.
Abu Ammaar Yasir Qadhi, conservative American Islamic cleric; worked for Dow after obtaining a chemical engineering degree from the University of Houston.
Abraham Quintanilla Jr., singer-songwriter; former shipping clerk at Dow's Freeport, Texas facility.
Sheldon Roberts, semiconductor pioneer who helped found Silicon Valley; former technical researcher at Dow.
Alexander Shulgin, chemist and pharmacologist credited with introducing the drug MDMA ("ecstasy") to psychologists in the late 1970s; worked for Dow in the 1960s, where he invented Zectran, the first biodegradable insecticide.
Mary P. Sinclair, environmental activist; former technical researcher at Dow.
Huimin Zhao, Centennial Endowed Chair of Chemical and Bio-Molecular Engineering at the University of Illinois Urbana-Champaign; project leader at Dow's Industrial Biotechnology Laboratory.
See also
BASF
Union Carbide
DuPont
References
Further reading
Boundy, Ray H. and Amos, J. Lawrence (1990). A History of the Dow Chemical Physics Lab: The Freedom to be Creative. M. Dekker. ISBN 0-8247-8097-3.
Brandt, E. Ned (2003). Growth Company: Dow Chemical's First Century. Michigan State University Press. ISBN 0-87013-426-4.
Whitehead, Don and Dendermonde, Max (1968). The Dow Story: The History of the Dow Chemical Co. McGraw-Hill. ISBN 90-800099-9-7.
External links
Official website
Business data for Dow Inc.
Dow Chemical Company Historical Image Collection, Science History Institute Digital Collections (an extensive collection of photographs and slides depicting the facilities, operations, and products of The Dow Chemical Company, primarily dating from the second half of the 20th century).
Advertisements from the Dow Chemical Historical Collection, Science History Institute Digital Collections (an extensive collection of domestic print advertisements, leaflets, posters, and other ephemera for various brands of The Dow Chemical Company, primarily taken from magazines published between 1921 and 1993).
sustainable transport
Sustainable transport refers to ways of transportation that are sustainable in terms of their social and environmental impacts. Components for evaluating sustainability include the particular vehicles used for road, water or air transport; the source of energy; and the infrastructure used to accommodate the transport (roads, railways, airways, waterways, canals and terminals). Transport operations and logistics as well as transit-oriented development are also involved in evaluation. Transportation sustainability is largely measured by transportation system effectiveness and efficiency as well as the environmental and climate impacts of the system. Transport systems have significant impacts on the environment, accounting for between 20% and 25% of world energy consumption and carbon dioxide emissions. The majority of the emissions, almost 97%, came from direct burning of fossil fuels; in 2019, about 95% of transport fuel came from fossil sources. Transportation is the main source of greenhouse gas emissions in the European Union: in 2019 it contributed about 31% of global emissions and 24% of emissions in the EU. In addition, until the COVID-19 pandemic, transport was the only sector in which emissions continued to increase. Greenhouse gas emissions from transport are increasing at a faster rate than in any other energy-using sector. Road transport is also a major contributor to local air pollution and smog. Sustainable transport systems make a positive contribution to the environmental, social and economic sustainability of the communities they serve. Transport systems exist to provide social and economic connections, and people quickly take up the opportunities offered by increased mobility, with poor households benefiting greatly from low-carbon transport options. The advantages of increased mobility need to be weighed against the environmental, social and economic costs that transport systems pose. Short-term activity often promotes incremental improvement in fuel efficiency and vehicle emissions controls, while long-term goals include migrating transportation from fossil-based energy to other alternatives such as renewable energy and use of other renewable resources. The entire life cycle of transport systems is subject to sustainability measurement and optimization. The United Nations Environment Programme (UNEP) estimates that each year 2.4 million premature deaths from outdoor air pollution could be avoided. Particularly hazardous for health are emissions of black carbon, a component of particulate matter, which is a known cause of respiratory and carcinogenic diseases and a significant contributor to global climate change. The links between greenhouse gas emissions and particulate matter make low-carbon transport an increasingly sustainable investment at the local level: both by reducing emission levels, and thus mitigating climate change, and by improving public health through better air quality. The term "green mobility" also refers to clean ways of movement or sustainable transport. The social costs of transport include road crashes, air pollution, physical inactivity, time taken away from the family while commuting, and vulnerability to fuel price increases. Many of these negative impacts fall disproportionately on those social groups who are also least likely to own and drive cars. Traffic congestion imposes economic costs by wasting people's time and by slowing the delivery of goods and services.
Traditional transport planning aims to improve mobility, especially for vehicles, and may fail to adequately consider wider impacts. But the real purpose of transport is access – to work, education, goods and services, friends and family – and there are proven techniques to improve access while simultaneously reducing environmental and social impacts, and managing traffic congestion. Communities which are successfully improving the sustainability of their transport networks are doing so as part of a wider program of creating more vibrant, livable, sustainable cities.
Definition
The term sustainable transport came into use as a logical follow-on from sustainable development, and is used to describe modes of transport, and systems of transport planning, which are consistent with wider concerns of sustainability. There are many definitions of sustainable transport, and of the related terms sustainable transportation and sustainable mobility. One such definition, from the European Union Council of Ministers of Transport, defines a sustainable transportation system as one that:
Allows the basic access and development needs of individuals, companies and society to be met safely and in a manner consistent with human and ecosystem health, and promotes equity within and between successive generations.
Is affordable, operates fairly and efficiently, offers a choice of transport mode, and supports a competitive economy, as well as balanced regional development.
Limits emissions and waste within the planet's ability to absorb them, uses renewable resources at or below their rates of generation, and uses non-renewable resources at or below the rates of development of renewable substitutes, while minimizing the impact on the use of land and the generation of noise.
Sustainability extends beyond just operating efficiency and emissions. A life-cycle assessment involves production, use and post-use considerations. A cradle-to-cradle design is more important than a focus on a single factor such as energy efficiency.
Benefits
Sustainable transport has many social and economic benefits that can accelerate local sustainable development. According to a series of reports by the Low Emission Development Strategies Global Partnership (LEDS GP), sustainable transport can help create jobs, improve commuter safety through investment in bicycle lanes and pedestrian pathways, and make access to employment and social opportunities more affordable and efficient. It also offers a practical opportunity to save people's time and household income as well as government budgets, making investment in sustainable transport a 'win-win' opportunity.
Environmental impact
Transport systems are major emitters of greenhouse gases, responsible for 23% of world energy-related GHG emissions in 2004, with about three-quarters coming from road vehicles. Data from 2011 indicated that one-third of all greenhouse gases produced are due to transportation. Currently 95% of transport energy comes from petroleum. Energy is consumed in the manufacture as well as the use of vehicles, and is embodied in transport infrastructure including roads, bridges and railways. Motorized transport also releases exhaust fumes that contain particulate matter, which is hazardous to human health and a contributor to climate change. The first historical attempt at evaluating the life-cycle environmental impact of vehicles is due to Theodore von Kármán.
After decades in which analysis focused on amending the von Kármán model, Dewulf and Van Langenhove introduced a model based on the second law of thermodynamics and exergy analysis. Chester and Horvath developed a similar model based on the first law that accounts for the necessary costs of infrastructure. The environmental impacts of transport can be reduced by reducing the weight of vehicles, sustainable styles of driving, reducing the friction of tires, encouraging electric and hybrid vehicles, improving the walking and cycling environment in cities, and by enhancing the role of public transport, especially electric rail. Green vehicles are intended to have less environmental impact than equivalent standard vehicles, although when the environmental impact of a vehicle is assessed over the whole of its life cycle this may not be the case. Electric vehicle technology significantly reduces transport CO2 emissions when comparing battery electric vehicles (BEVs) with equivalent internal combustion engine vehicles (ICEVs). The extent to which it does this depends on the embodied energy of the vehicle and the source of the electricity. Lifecycle greenhouse gas emission reductions from BEVs are significant, even in countries with relatively high shares of coal in their electricity generation mix, such as China and India. As a specific example, a Nissan Leaf in the UK in 2019 produced one third of the greenhouse gases of the average internal combustion car. The Online Electric Vehicle (OLEV), developed by the Korea Advanced Institute of Science and Technology (KAIST), is an electric vehicle that can be charged while stationary or driving, thus removing the need to stop at a charging station. The City of Gumi in South Korea runs a 24 km round trip along which buses receive 100 kW (136 horsepower) of electricity at an 85% maximum power transmission efficiency while maintaining a 17 cm air gap between the underbody of the vehicle and the road surface. At that power, only a few sections of the road need embedded cables. Hybrid vehicles, which use an internal combustion engine combined with an electric engine to achieve better fuel efficiency than a regular combustion engine, are already common. Natural gas is also used as a transport fuel, but is a less promising technology as it is still a fossil fuel and still has significant emissions (though lower than gasoline, diesel, etc.). Brazil met 17% of its transport fuel needs from bioethanol in 2007, but the OECD has warned that the success of (first-generation) biofuels in Brazil is due to specific local circumstances. Internationally, first-generation biofuels are forecast to have little or no impact on greenhouse emissions, at significantly higher cost than energy efficiency measures. Later-generation biofuels (second to fourth generation), however, do have significant environmental benefits, as they are not a driving force for deforestation and do not compete directly with food production (the food-versus-fuel issue). In practice there is a sliding scale of green transport depending on the sustainability of the option. Green vehicles are more fuel-efficient, but only in comparison with standard vehicles, and they still contribute to traffic congestion and road crashes. Well-patronized public transport networks based on traditional diesel buses use less fuel per passenger than private vehicles, and are generally safer and use less road space than private vehicles.
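The dependence of BEV emissions on grid carbon intensity can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the consumption figures, grid intensities, emission factors and embodied-emission values are assumed round numbers, not sourced data; the lifecycle figure simply amortizes assumed manufacturing emissions over an assumed vehicle lifetime.

```python
# Rough per-km CO2 comparison of a battery electric vehicle (BEV) and a
# petrol car (ICEV). All parameter values are illustrative assumptions.

def bev_gco2_per_km(kwh_per_km: float, grid_gco2_per_kwh: float) -> float:
    """Use-phase emissions of a BEV, driven by grid carbon intensity."""
    return kwh_per_km * grid_gco2_per_kwh

def icev_gco2_per_km(litres_per_100km: float,
                     gco2_per_litre: float = 2310.0) -> float:
    """Use-phase emissions of a petrol car (~2.31 kg CO2 per litre burned)."""
    return litres_per_100km / 100.0 * gco2_per_litre

def lifecycle_gco2_per_km(use_phase: float, embodied_kg: float,
                          lifetime_km: float) -> float:
    """Add manufacturing emissions amortized over the vehicle lifetime."""
    return use_phase + embodied_kg * 1000.0 / lifetime_km

for label, grid in [("low-carbon grid (200 g/kWh)", 200.0),
                    ("coal-heavy grid (700 g/kWh)", 700.0)]:
    bev = lifecycle_gco2_per_km(bev_gco2_per_km(0.17, grid),
                                embodied_kg=8000.0, lifetime_km=200_000.0)
    print(f"BEV, {label}: {bev:6.1f} gCO2/km")

ice = lifecycle_gco2_per_km(icev_gco2_per_km(7.0),
                            embodied_kg=6000.0, lifetime_km=200_000.0)
print(f"Petrol ICEV: {ice:6.1f} gCO2/km")
```

Even under the coal-heavy grid assumption the BEV comes out ahead here, which matches the qualitative claim above; the exact ratio is sensitive to every assumed parameter.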
Green public transport vehicles, including electric trains, trams and electric buses, combine the advantages of green vehicles with those of sustainable transport choices. Other transport choices with very low environmental impact are cycling and other human-powered vehicles, and animal-powered transport. The most common green transport choice, with the least environmental impact, is walking. Transport on rails boasts excellent efficiency (see fuel efficiency in transportation).
Transport and social sustainability
Cities with overbuilt roadways have experienced unintended consequences, linked to radical drops in public transport, walking, and cycling. In many cases, streets became void of "life." Stores, schools, government centers and libraries moved away from central cities, and residents who did not flee to the suburbs experienced a much reduced quality of public space and of public services. As schools were closed, their mega-school replacements in outlying areas generated additional traffic; the number of cars on US roads between 7:15 and 8:15 a.m. increases 30% during the school year. Yet another impact was an increase in sedentary lifestyles, causing and complicating a national epidemic of obesity, and accompanying dramatically increased health care costs. Car-based transport systems present barriers to employment in low-income neighbourhoods, with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income.
Potential shift to sustainable transport in developing countries
In developing countries such as Uganda, researchers have sought to determine factors that could possibly influence travelers to opt for bicycles as an alternative to motorcycle taxis (Bodaboda). The findings suggest that, generally, the age, gender, and ability of the individual to cycle in the first place are key determinants of their willingness to shift to a more sustainable mode. Transport system improvements that could reduce the perceived risks of cycling were also seen to be the most impactful changes that could contribute towards the greater use of bicycles.
Cities
Cities are shaped by their transport systems. In The City in History, Lewis Mumford documented how the location and layout of cities was shaped around a walkable center, often located near a port or waterway, and with suburbs accessible by animal transport or, later, by rail or tram lines. In 1939, the New York World's Fair included a model of an imagined city, built around a car-based transport system. In this "greater and better world of tomorrow", residential, commercial and industrial areas were separated, and skyscrapers loomed over a network of urban motorways. These ideas captured the popular imagination, and are credited with influencing city planning from the 1940s to the 1970s. The popularity of the car in the post-war era led to major changes in the structure and function of cities. There was some opposition to these changes at the time. The writings of Jane Jacobs, in particular The Death and Life of Great American Cities, provide a poignant reminder of what was lost in this transformation, and a record of community efforts to resist these changes. Lewis Mumford asked "is the city for cars or for people?" Donald Appleyard documented the consequences for communities of increasing car traffic in "The View from the Road" (1964), and in the UK Mayer Hillman first published research into the impacts of traffic on child independent mobility in 1971.
Despite these notes of caution, trends in car ownership, car use and fuel consumption continued steeply upward throughout the post-war period. Mainstream transport planning in Europe has, by contrast, never been based on the assumption that the private car was the best or only solution for urban mobility. For example, the Dutch Transport Structure Scheme has since the 1970s required that demand for additional vehicle capacity only be met "if the contribution to societal welfare is positive", and since 1990 has included an explicit target to halve the rate of growth in vehicle traffic. Some cities outside Europe have also consistently linked transport to sustainability and to land-use planning, notably Curitiba, Brazil, Portland, Oregon and Vancouver, Canada. There are major differences in transport energy consumption between cities; an average U.S. urban dweller uses 24 times more energy annually for private transport than a Chinese urban resident, and almost four times as much as a European urban dweller. These differences cannot be explained by wealth alone but are closely linked to the rates of walking, cycling, and public transport use and to enduring features of the city including urban density and urban design. The cities and nations that have invested most heavily in car-based transport systems are now the least environmentally sustainable, as measured by per capita fossil fuel use. The social and economic sustainability of car-based transportation engineering has also been questioned. Within the United States, residents of sprawling cities make more frequent and longer car trips, while residents of traditional urban neighborhoods make a similar number of trips, but travel shorter distances and walk, cycle and use transit more often. It has been calculated that New York residents save $19 billion each year simply by owning fewer cars and driving less than the average American. A less car-intensive means of urban transport is carsharing, which is becoming popular in North America and Europe and, according to The Economist, can reduce car ownership at an estimated rate of one rental car replacing 15 owned vehicles. Car sharing has also begun in the developing world, where traffic and urban density are often worse than in developed countries. Companies like Zoom in India, eHi in China, and Carrot in Mexico are bringing car-sharing to developing countries in an effort to reduce car-related pollution, ameliorate traffic, and expand the number of people who have access to cars. The European Commission adopted the Action Plan on urban mobility on 30 September 2009 for sustainable urban mobility, and was to review its implementation in 2012 and assess the need for further action. In 2007, 72% of the European population lived in urban areas, which are key to growth and employment. Cities need efficient transport systems to support their economy and the welfare of their inhabitants. Around 85% of the EU's GDP is generated in cities. Urban areas today face the challenge of making transport sustainable in environmental (CO2, air pollution, noise) and competitiveness (congestion) terms while at the same time addressing social concerns.
These range from the need to respond to health problems and demographic trends, and to foster economic and social cohesion, to taking into account the needs of persons with reduced mobility, families and children. The C40 Cities Climate Leadership Group (C40) is a group of 94 cities around the world driving urban action that reduces greenhouse gas emissions and climate risks, while increasing the health and wellbeing of urban citizens. In October 2019, by signing the C40 Clean Air Cities Declaration, 35 mayors recognized that breathing clean air is a human right and committed to work together to form a global coalition for clean air. Studies using satellite data have shown that cities with subway systems produce much less greenhouse gas.
Policies and governance
By country
United Kingdom
In 2021 the Institute for Public Policy Research issued a statement saying that car use in the United Kingdom must shrink while active transport and public transport should be used more. The Department for Transport responded that it would spend 2 billion pounds on active transport, more than ever before, including making the railways of England and the rest of the UK greener. UK studies have shown that a modal shift from air to rail could result in a sixtyfold reduction in CO2 emissions.
Germany
Some Western countries are making transportation more sustainable through both long-term and short-term implementations. An example is the modification of available transportation in Freiburg, Germany. The city has implemented extensive methods of public transportation, cycling, and walking, along with large areas where cars are not allowed.
United States
Since many Western countries are highly automobile-oriented, the main transit that people use is personal vehicles; about 80% of their travel involves cars. California, for example, is one of the highest greenhouse gas emitters in the United States. The federal government has had to devise plans to reduce the total number of vehicle trips in order to lower greenhouse gas emissions, such as:
Improving public transport through the provision of larger coverage areas, in order to provide more mobility and accessibility, and through new technology to provide a more reliable and responsive public transportation network.
Encouraging walking and biking through the provision of wider pedestrian pathways, bike-share stations in downtowns, parking lots located far from shopping centers, limits on on-street parking, and slower traffic lanes in downtown areas.
Increasing the cost of car ownership and gas taxes through increased parking fees and tolls, encouraging people to drive more fuel-efficient vehicles. This can produce a social equity problem, since lower-income people usually drive older vehicles with lower fuel efficiency. Government can use the extra revenue collected from taxes and tolls to improve public transportation and benefit poor communities.
Other states and nations have made efforts to translate knowledge in behavioral economics into evidence-based sustainable transportation policies.
France
In March 2022, an advertising regulation came into force in France, requiring all advertising materials for automobiles to include one of three standard disclaimers promoting the use of sustainable transport practices. This applies to all vehicles, including electric vehicles. In 2028, it will also become illegal to advertise vehicles which emit more than 128 grams of carbon dioxide per kilometre.
At city level
Sustainable transport policies have their greatest impact at the city level.
Some of the biggest cities in Western Europe have relatively sustainable transport systems. In Paris, 53% of trips are made by walking, 3% by bicycle, 34% by public transport, and only 10% by car. In the entire Ile-de-France region, walking is the most popular mode of transport. In Amsterdam, 28% of trips are made by walking, 31% by bicycle, 18% by public transport and only 23% by car. In Copenhagen, 62% of people commute to school or work by bicycle. Outside Western Europe, cities which have consistently included sustainability as a key consideration in transport and land use planning include Curitiba, Brazil; Bogota, Colombia; Portland, Oregon; and Vancouver, Canada. The state of Victoria, Australia passed legislation in 2010 – the Transport Integration Act – to compel its transport agencies to actively consider sustainability issues, including climate change impacts, in transport policy, planning and operations. Many other cities throughout the world have recognized the need to link sustainability and transport policies, for example by joining the Cities for Climate Protection program. Some cities are trying to become car-free cities, e.g., by limiting or excluding the use of cars. In 2020, the COVID-19 pandemic pushed several cities to adopt plans to drastically increase biking and walking; these included Milan, London, Brighton, and Dublin. These plans were adopted to facilitate social distancing by avoiding public transport while at the same time preventing a rise in traffic congestion and air pollution from increased car use. A similar plan was adopted by New York City and Paris. The pandemic's impact on urban public transportation means revenue declines will put a strain on operators' finances and may cause creditworthiness to worsen. Governments might be forced to subsidize operators with financial transfers, in turn reducing resources available for investment in greener transportation systems.
Community and grassroots action
Sustainable transport is fundamentally a grassroots movement, albeit one which is now recognized as being of citywide, national and international significance. Whereas it started as a movement driven by environmental concerns, in recent years there has been increased emphasis on social equity and fairness issues, and in particular the need to ensure proper access and services for lower-income groups and people with mobility limitations, including the fast-growing population of older citizens. Many of the people exposed to the most vehicle noise, pollution and safety risk have been those who do not own, or cannot drive, cars, and those for whom the cost of car ownership causes a severe financial burden. An organization called Greenxc, started in 2011, created a national awareness campaign in the United States encouraging people to carpool by ride-sharing across the country, stopping over at various destinations along the way and documenting their travel through video footage, posts and photography. Ride-sharing reduces individuals' carbon footprints by allowing several people to use one car instead of everyone using individual cars, as the simple calculation below illustrates.
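A minimal arithmetic sketch of that claim follows; the per-km emission factor and trip length are assumed illustrative values, not sourced figures.

```python
# Per-person CO2 for one car trip shared among several occupants.
# The emission factor and trip length are illustrative assumptions.

CAR_GCO2_PER_KM = 180.0  # assumed average petrol car, g CO2 per km

def per_person_gco2(trip_km: float, occupants: int) -> float:
    """Allocate one trip's emissions evenly across its occupants."""
    if occupants < 1:
        raise ValueError("a trip needs at least one occupant")
    return trip_km * CAR_GCO2_PER_KM / occupants

trip_km = 30.0
for occupants in (1, 2, 4):
    grams = per_person_gco2(trip_km, occupants)
    print(f"{occupants} occupant(s): {grams:7.1f} g CO2 per person")
# Four people sharing one 30 km trip emit 1,350 g each instead of the
# 5,400 g each would emit driving alone - a 75% per-person reduction.
```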
At the beginning of the 21st century, some companies have been trying to increase the use of sailing ships, even for commercial purposes, for example Fairtransport and New Dawn Traders, which have created the Sail Cargo Alliance. The European Investment Bank committed €314 million between 2018 and 2022 to green marine transport, funding the building of new ships and the retrofitting of current ships with eco-friendly technologies to increase their energy efficiency and lower harmful emissions. The Bank also offered an average of €11 billion per year from 2012 to 2022 for sustainable transportation solutions and climate-friendly initiatives. In 2022, railway projects received around 32% of overall transport loans, while urban mobility received approximately 37%.
Recent trends
Car travel increased steadily throughout the twentieth century, but trends since 2000 have been more complex. Oil price rises from 2003 have been linked to a decline in per capita fuel use for private vehicle travel in the US, Britain and Australia. In 2008, global oil consumption fell by 0.8% overall, with significant declines in consumption in North America, Western Europe, and parts of Asia. Other factors affecting a decline in driving, at least in America, include the retirement of Baby Boomers who now drive less, preference for other travel modes (such as transit) by younger age cohorts, the Great Recession, and the rising use of technology (internet, mobile devices) which has made travel less necessary and possibly less attractive.
Greenwashing
The term green transport is often used as a greenwash marketing technique for products which are not proven to make a positive contribution to environmental sustainability. Such claims can be legally challenged. For instance, the Norwegian Consumer Ombudsman has targeted car manufacturers who claim that their cars are "green", "clean" or "environmentally friendly"; manufacturers risk fines if they fail to drop the words. The Australian Competition & Consumer Commission (ACCC) describes "green" claims on products as "very vague, inviting consumers to give a wide range of meanings to the claim, which risks misleading them". In 2008 the ACCC forced a car retailer to stop its green marketing of Saab cars, which was found by the Australian Federal Court to be "misleading".
Tools and incentives
Several European countries are introducing financial incentives that support more sustainable modes of transport. The European Cyclists' Federation, which focuses on daily cycling for transport, has created a document containing a partial overview. In the UK, employers have for many years been providing employees with financial incentives: the employee leases or borrows a bike that the employer has purchased, and other forms of support are also available. The scheme is beneficial for the employee, who saves money and gains an incentive to integrate exercise into the daily routine. The employer can expect a tax deduction, lower sick leave and less pressure on parking spaces for cars. Since 2010, there has been a scheme in Iceland (Samgöngugreiðslur) under which those who do not drive a car to work are paid a monthly lump sum. An employee must sign a statement agreeing not to use a car for work more often than one day a week, or 20% of the days in a period. Some employers pay fixed amounts based on trust; others reimburse expenses for bicycle repairs, period tickets for public transport and the like. Since 2013, amounts up to ISK 8000 per month have been tax-free.
Most major workplaces offer this, and a significant proportion of employees use the scheme. Since 2019, half the amount is tax-free if the employee signs a contract not to use a car to work for more than 40% of the days of the contract period.
Possible measures for urban transport
The EU Directorate-General for Transport and Energy (DG-TREN) has launched a program which focuses mostly on urban transport.
History
Most of the tools and concepts of sustainable transport were developed before the phrase was coined. Walking, the first mode of transport, is also the most sustainable. Public transport dates back at least as far as the invention of the public bus by Blaise Pascal in 1662. The first passenger tram began operation in 1807 and the first passenger rail service in 1825. Pedal bicycles date from the 1860s. These were the only personal transport choices available to most people in Western countries prior to World War II, and remain the only options for most people in the developing world. Freight was moved by human power, animal power or rail.
Mass motorization
The post-war years brought increased wealth and a demand for much greater mobility for people and goods. The number of road vehicles in Britain increased fivefold between 1950 and 1979, with similar trends in other Western nations. Most affluent countries and cities invested heavily in bigger and better-designed roads and motorways, which were considered essential to underpin growth and prosperity. Transport planning became a branch of urban planning, and the recognition of induced demand marked a pivotal change from the "predict and provide" approach toward a sustainable approach incorporating land use planning and public transit. Public investment in transit, walking and cycling declined dramatically in the United States, Great Britain and Australia, although this did not occur to the same extent in Canada or mainland Europe. Concerns about the sustainability of this approach became widespread during the 1973 oil crisis and the 1979 energy crisis. The high cost and limited availability of fuel led to a resurgence of interest in alternatives to single-occupancy vehicle travel. Transport innovations dating from this period include high-occupancy vehicle lanes, citywide carpool systems and transportation demand management. Singapore implemented congestion pricing in the late 1970s, and Curitiba began implementing its Bus Rapid Transit system in the early 1980s. Relatively low and stable oil prices during the 1980s and 1990s led to significant increases in vehicle travel from 1980 to 2000, both directly because people chose to travel by car more often and for greater distances, and indirectly because cities developed tracts of suburban housing, distant from shops and from workplaces, now referred to as urban sprawl. Trends in freight logistics, including a movement from rail and coastal shipping to road freight and a requirement for just-in-time deliveries, meant that freight traffic grew faster than general vehicle traffic. At the same time, the academic foundations of the "predict and provide" approach to transport were being questioned, notably by Peter Newman in a set of comparative studies of cities and their transport systems dating from the mid-1980s. The British Government's White Paper on Transport marked a change in direction for transport planning in the UK. In the introduction to the White Paper, Prime Minister Tony Blair stated: "We recognise that we cannot simply build our way out of the problems we face.
It would be environmentally irresponsible – and would not work." A companion document to the White Paper, called "Smarter Choices", researched the potential to scale up the small and scattered sustainable transport initiatives then occurring across Britain, and concluded that the comprehensive application of these techniques could reduce peak-period car travel in urban areas by over 20%. A similar study by the United States Federal Highway Administration was released in 2004 and likewise concluded that a more proactive approach to transportation demand was an important component of overall national transport strategy.
Mobility transition
See also
Groups:
EcoMobility Alliance
Institute for Transportation and Development Policy
International Association of Public Transport
Michelin Challenge Bibendum
References
Bibliography
Newman, P. and Kenworthy, J. (1999). Sustainability and Cities: Overcoming Automobile Dependence. Island Press, Washington, DC. ISBN 1-55963-660-2.
Nagurney, A. (2000). Sustainable Transportation Networks. Edward Elgar Publishing, Cheltenham, England. ISBN 1-84064-357-9.
Schiller, P., Bruun, E. C. and Kenworthy, J. R. (2010). An Introduction to Sustainable Transportation: Policy, Planning and Implementation. Earthscan, London and Washington, DC. ISBN 978-1-84407-665-9.
Enoch, M. P. (2012). Sustainable Transport, Mobility Management and Travel Plans. Ashgate Press, Farnham, Surrey. ISBN 978-0-7546-7939-4.
External links
Guiding Principles to Sustainable Mobility
Sustainable Urban Transport Project - knowledge platform (SUTP)
German Partnership for Sustainable Mobility (GPSM)
Bridging the Gap: Pathways for transport in the post 2012 process
Sustainable-mobility.org: the centre of resources on sustainable transport
Transportation Research at IssueLab
Switching Gears: Enabling Access to Sustainable Urban Mobility
san francisco climate action plan
The San Francisco Climate Action Plan is a greenhouse gas reduction initiative adopted by the City and County of San Francisco, United States in 2002. It aims to reduce the city's greenhouse gas emissions to 20% below 1990 levels by 2012. The plan was updated in 2013 to adopt an updated target of 40% below 1990 levels by 2025.
Greenhouse gas emissions
San Francisco's annual greenhouse gas emissions were 9.7 million tons equivalent carbon dioxide (eCO2) in 2000, which was 12.5 tons eCO2 per person. This level of emissions is lower than those of both the state and country in which San Francisco is located: California's annual per capita emissions were 14.2 tons eCO2 in 2000, and the USA's annual per capita emissions were 20.4 tons eCO2 in 2000. However, it is much higher than the emissions level for the world as a whole, which was 4.4 tons CO2 per person in 2003.
History
The Climate Action Plan is one of many initiatives adopted by state and local governments in the USA to reduce greenhouse gas emissions, enacted primarily in response to the absence of such action at the federal level. It was approved by the San Francisco Board of Supervisors as Resolution Number 0158-02, the Greenhouse Gas Emission Reduction Resolution, on 4 March 2002. In doing so, San Francisco also joined over 500 cities participating in the Cities for Climate Protection Campaign of the International Council for Local Environmental Initiatives.
Goals
San Francisco's annual greenhouse gas emissions were 9.1 million tons equivalent carbon dioxide (eCO2) in 1990 and 9.7 million tons eCO2 in 2000, and the Climate Action Plan's goal is to reduce emissions to 7.2 million tons eCO2 by 2012. These goals, therefore, represent reductions of greenhouse gas emissions of 20% from 1990 levels and 26% from 2000 levels; a short worked check of these figures appears below. The sources of greenhouse gases include those generated by fossil fuel and electricity consumption used for transportation, natural gas and electricity used in buildings, as well as those generated by solid waste.
Reports
The plan's first report, Climate Action Plan for San Francisco: Local Actions to Reduce Greenhouse Gas Emissions, was published by the San Francisco Department of the Environment and the San Francisco Public Utilities Commission with assistance from the International Council for Local Environmental Initiatives in September 2004. In four chapters, the report describes the causes and local impacts of climate change, the city's greenhouse gas emissions reduction target, actions to reduce those emissions, and an implementation strategy for the near term. The 2004 report proposes a wide variety of actions to achieve its stated emissions reductions, which fall into the following categories: transportation, energy efficiency, renewable energy, and solid waste. Within each category, each action is described, including an estimate of the CO2 emissions reduction it would result in.
Results
San Francisco met its Climate Action Plan targets with a 28.5% reduction from 1990 levels. The 2015 total was 4.4 million mtCO2e.
See also
Individual and political action on climate change
Climate change mitigation
List of countries by carbon dioxide emissions per capita
San Francisco Mandatory Recycling and Composting Ordinance
San Diego Climate Action Plan
References
Climate Action Plan for San Francisco
External links
Full text of the Kyoto Protocol (HTML version), (PDF version)
State Climate Action Plan Fact Sheet
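As a worked check of the Goals figures above (a hedged sketch; the tonnage values are taken directly from the article text):

```python
# Verify the stated reduction targets from the Goals section:
# 9.1 Mt eCO2 (1990) and 9.7 Mt eCO2 (2000), with a 2012 target of 7.2 Mt.

BASE_1990 = 9.1    # million tons eCO2
BASE_2000 = 9.7    # million tons eCO2
TARGET_2012 = 7.2  # million tons eCO2

def reduction_pct(baseline: float, target: float) -> float:
    """Percentage reduction of `target` relative to `baseline`."""
    return (baseline - target) / baseline * 100.0

print(f"vs 1990: {reduction_pct(BASE_1990, TARGET_2012):.1f}% reduction")  # ~20.9%, quoted as 20%
print(f"vs 2000: {reduction_pct(BASE_2000, TARGET_2012):.1f}% reduction")  # ~25.8%, quoted as 26%
```

Both results are close to the 20% and 26% figures quoted in the plan.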
electricity generation
Electricity generation is the process of generating electric power from sources of primary energy. For utilities in the electric power industry, it is the stage prior to its delivery (transmission, distribution, etc.) to end users or its storage (using, for example, the pumped-storage method). Usable electricity is not freely available in nature, so it must be "produced" (that is, transforming other forms of energy to electricity). Production is carried out in power stations (also called "power plants"). Electricity is most often generated at a power plant by electromechanical generators, primarily driven by heat engines fueled by combustion or nuclear fission, but also by other means such as the kinetic energy of flowing water and wind. Other energy sources include solar photovoltaics and geothermal power. There are also exotic and speculative methods to recover energy, such as proposed fusion reactor designs which aim to directly extract energy from intense magnetic fields created by fast-moving charged particles produced by the fusion reaction (see magnetohydrodynamics). Phasing out coal-fired power stations and eventually gas-fired power stations, or, if practical, capturing their greenhouse gas emissions, is an important part of the energy transformation required to limit climate change. Vastly more solar power and wind power is forecast to be required, with electricity demand increasing strongly with further electrification of transport, homes and industry. However, in 2023, it was reported that the global electricity supply was approaching peak CO2 emissions thanks to the growth of solar and wind power.
History
The fundamental principles of electricity generation were discovered in the 1820s and early 1830s by British scientist Michael Faraday. His method, still used today, is for electricity to be generated by the movement of a loop of wire, or Faraday disc, between the poles of a magnet. Central power stations became economically practical with the development of alternating current (AC) power transmission, using power transformers to transmit power at high voltage and with low loss. Commercial electricity production started with the coupling of the dynamo to the hydraulic turbine. The mechanical production of electric power began the Second Industrial Revolution and made possible several inventions using electricity, with the major contributors being Thomas Alva Edison and Nikola Tesla. Previously the only way to produce electricity was by chemical reactions or using battery cells, and the only practical use of electricity was for the telegraph. Electricity generation at central power stations started in 1882, when a steam engine driving a dynamo at Pearl Street Station produced a DC current that powered public lighting on Pearl Street, New York. The new technology was quickly adopted by many cities around the world, which adapted their gas-fueled street lights to electric power. Soon after, electric lights were used in public buildings, in businesses, and to power public transport, such as trams and trains. The first power plants used water power or coal. Today a variety of energy sources are used, such as coal, nuclear, natural gas, hydroelectric, wind, and oil, as well as solar energy, tidal power, and geothermal sources. In the 1880s the popularity of electricity grew massively with the introduction of the incandescent light bulb.
Although there are 22 recognised inventors of the light bulb prior to Joseph Swan and Thomas Edison, Edison's and Swan's invention became by far the most successful and popular of all. During the early years of the 19th century, massive jumps in electrical science were made, and by the later 19th century the advancement of electrical technology and engineering had led to electricity being part of everyday life. With the introduction of many electrical inventions and their implementation into everyday life, the demand for electricity within homes grew dramatically. With this increase in demand, the potential for profit was seen by many entrepreneurs, who began investing in electrical systems to eventually create the first public electricity utilities. This process in history is often described as electrification. The earliest distribution of electricity came from companies operating independently of one another. A consumer would purchase electricity from a producer, and the producer would distribute it through their own power grid. As technology improved, so did the productivity and efficiency of generation. Inventions such as the steam turbine had a massive impact on the efficiency of electrical generation, and also on the economics of generation. This conversion of heat energy into mechanical work was similar to that of steam engines, but at a significantly larger scale and far more productively. The improvements of these large-scale generation plants were critical to the process of centralised generation, as they would become vital to the entire power system that we now use today. Throughout the middle of the 20th century many utilities began merging their distribution networks due to economic and efficiency benefits. Along with the invention of long-distance power transmission, the coordination of power plants began to form. This system was then secured by regional system operators to ensure stability and reliability. The electrification of homes began in Northern Europe and in North America in the 1920s in large cities and urban areas. It wasn't until the 1930s that rural areas saw the large-scale establishment of electrification.
Methods of generation
Several fundamental methods exist to convert other forms of energy into electrical energy. Utility-scale generation is achieved by rotating electric generators or by photovoltaic systems. A small proportion of electric power distributed by utilities is provided by batteries. Other forms of electricity generation used in niche applications include the triboelectric effect, the piezoelectric effect, the thermoelectric effect, and betavoltaics.
Generators
Electric generators transform kinetic energy into electricity. This is the most used form for generating electricity and is based on Faraday's law. It can be seen experimentally by rotating a magnet within closed loops of conducting material (e.g. copper wire). Almost all commercial electrical generation is done using electromagnetic induction, in which mechanical energy forces a generator to rotate.
Electrochemistry
Electrochemistry is the direct transformation of chemical energy into electricity, as in a battery. Electrochemical electricity generation is important in portable and mobile applications. Currently, most electrochemical power comes from batteries. Primary cells, such as the common zinc–carbon batteries, act as power sources directly, but secondary cells (i.e. rechargeable batteries) are used for storage systems rather than primary generation systems.
Open electrochemical systems, known as fuel cells, can be used to extract power either from natural fuels or from synthesized fuels. Osmotic power is a possibility at places where salt and fresh water merge.
Photovoltaic effect
The photovoltaic effect is the transformation of light into electrical energy, as in solar cells. Photovoltaic panels convert sunlight directly to DC electricity. Power inverters can then convert that to AC electricity if needed. Although sunlight is free and abundant, solar power electricity is still usually more expensive to produce than large-scale mechanically generated power due to the cost of the panels. Low-efficiency silicon solar cells have been decreasing in cost, and multijunction cells with close to 30% conversion efficiency are now commercially available. Over 40% efficiency has been demonstrated in experimental systems. Until recently, photovoltaics were most commonly used in remote sites where there is no access to a commercial power grid, or as a supplemental electricity source for individual homes and businesses. Recent advances in manufacturing efficiency and photovoltaic technology, combined with subsidies driven by environmental concerns, have dramatically accelerated the deployment of solar panels. Installed capacity is growing by around 20% per year, led by increases in Germany, Japan, the United States, China, and India.
Economics
The selection of electricity production modes and their economic viability varies in accordance with demand and region. The economics vary considerably around the world, resulting in a wide range of residential selling prices. Hydroelectric plants, nuclear power plants, thermal power plants and renewable sources have their own pros and cons, and selection is based upon the local power requirement and the fluctuations in demand. All power grids have varying loads on them, but the daily minimum is the base load, often supplied by plants which run continuously. Nuclear, coal, oil, gas and some hydro plants can supply base load. If well construction costs for natural gas are below $10 per MWh, generating electricity from natural gas is cheaper than generating power by burning coal. Nuclear power plants can produce a huge amount of power from a single unit. However, nuclear disasters have raised concerns over the safety of nuclear power, and the capital cost of nuclear plants is very high. Hydroelectric power plants are located in areas where the potential energy from falling water can be harnessed for moving turbines and the generation of power. Hydro may not be an economically viable single source of production where the ability to store the flow of water is limited and the load varies too much during the annual production cycle.
Generating equipment
Electric generators were known in simple forms from the discovery of electromagnetic induction in the 1830s. In general, some form of prime mover, such as an engine or the turbines described below, drives a rotating magnetic field past stationary coils of wire, thereby turning mechanical energy into electricity. The only commercial-scale electricity production that does not employ a generator is solar PV.
Turbines
Almost all commercial electrical power on Earth is generated with a turbine, driven by wind, water, steam or burning gas. The turbine drives a generator, thus transforming its mechanical energy into electrical energy by electromagnetic induction. There are many different methods of developing mechanical energy, including heat engines, hydro, wind and tidal power.
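Before turning to the heat engines that drive most turbines, the electromagnetic induction principle described under Generators above can be illustrated with a back-of-the-envelope calculation: a coil of N turns and area A rotating at angular frequency ω in a uniform field B produces a sinusoidal EMF, e(t) = N B A ω sin(ωt). The sketch below uses made-up parameter values chosen only to give grid-like numbers; none of them come from the text.

```python
# Minimal sketch of Faraday's-law induction for a rotating coil:
# e(t) = N * B * A * omega * sin(omega * t).
# All parameter values are illustrative assumptions, not sourced data.

import math

N = 100        # turns in the coil (assumed)
B = 0.5        # magnetic flux density in tesla (assumed)
A = 0.02       # coil area in square metres (assumed)
f = 50.0       # rotation frequency in Hz (e.g. a 50 Hz grid)
omega = 2.0 * math.pi * f

peak_emf = N * B * A * omega           # about 314 V with these values
rms_emf = peak_emf / math.sqrt(2.0)    # about 222 V

print(f"Peak EMF: {peak_emf:.1f} V, RMS EMF: {rms_emf:.1f} V")
for k in range(5):                     # sample one full period in quarters
    t = k / (4.0 * f)
    e = peak_emf * math.sin(omega * t)
    print(f"t = {t * 1000:5.2f} ms -> e(t) = {e:8.1f} V")
```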
Most electric generation is driven by heat engines. The combustion of fossil fuels supplies most of the energy to these engines, with a significant fraction from nuclear fission and some from renewable sources. The modern steam turbine (invented by Sir Charles Parsons in 1884) currently generates about 80% of the electric power in the world using a variety of heat sources. Turbine types include:
Steam: water is boiled by coal burned in a thermal power plant (about 41% of all electricity is generated this way); by nuclear fission heat created in a nuclear reactor (less than 15% of electricity is generated this way); or by renewable energy, where the steam is generated by biomass, solar thermal energy, or geothermal power.
Natural gas: turbines are driven directly by gases produced by combustion.
Combined cycle: driven by both steam and natural gas. These plants generate power by burning natural gas in a gas turbine and use the residual heat to generate steam. At least 20% of the world's electricity is generated by natural gas.
Water: energy is captured by a water turbine from the movement of water – from falling water, the rise and fall of tides or ocean thermal currents (see ocean thermal energy conversion). Currently, hydroelectric plants provide approximately 16% of the world's electricity.
Wind: the windmill was a very early wind turbine. In 2018 around 5% of the world's electricity was produced from wind.
Turbines can also use heat-transfer liquids other than steam. Supercritical carbon dioxide based cycles can provide higher conversion efficiency due to faster heat exchange, higher energy density and simpler power cycle infrastructure. Supercritical carbon dioxide blends, which are currently in development, can further increase efficiency by optimizing the cycle's critical pressure and temperature points. Although turbines are most common in commercial power generation, smaller generators can be powered by gasoline or diesel engines. These may be used for backup generation or as a prime source of power within isolated villages.
Production
Total worldwide gross production of electricity in 2016 was 25,082 TWh. Sources of electricity were coal and peat 38.3%, natural gas 23.1%, hydroelectric 16.6%, nuclear power 10.4%, oil 3.7%, solar/wind/geothermal/tidal/other 5.6%, and biomass and waste 2.3%. In 2021, wind- and solar-generated electricity reached 10% of globally produced electricity, and clean sources (solar, wind and others) generated 38% of the world's electricity.
Historical results of production of electricity
Production by country
The United States has long been the largest producer and consumer of electricity, with a global share in 2005 of at least 25%, followed by China, Japan, Russia, and India. In 2011, China overtook the United States to become the largest producer of electricity.
Environmental concerns
Variations between countries generating electrical power affect concerns about the environment. In France only 10% of electricity is generated from fossil fuels; the US is higher, at 70%, and China is at 80%. The cleanliness of electricity depends on its source. Methane leaks (from the natural gas used to fuel gas-fired power plants) and carbon dioxide emissions from fossil fuel-based electricity generation account for a significant portion of world greenhouse gas emissions. In the United States, fossil fuel combustion for electric power generation is responsible for 65% of all emissions of sulfur dioxide, the main component of acid rain.
Electricity generation is the fourth-highest combined source of NOx, carbon monoxide, and particulate matter in the US. According to the International Energy Agency (IEA), low-carbon electricity generation needs to account for 85% of global electrical output by 2040 in order to ward off the worst effects of climate change. Like other organizations including the Energy Impact Center (EIC) and the United Nations Economic Commission for Europe (UNECE), the IEA has called for the expansion of nuclear and renewable energy to meet that objective. Some, like EIC founder Bret Kugelmass, believe that nuclear power is the primary method for decarbonizing electricity generation because it can also power direct air capture that removes existing carbon emissions from the atmosphere. Nuclear power plants can also support district heating and desalination projects, limiting carbon emissions and the need for expanded electrical output. A fundamental issue with centralised generation and the current electrical generation methods in use today is the significant negative environmental effect of many of the generation processes. Coal and gas not only release carbon dioxide as they combust, but their extraction from the ground also impacts the environment. Open-pit coal mines use large areas of land to extract coal and limit the potential for productive land use after the excavation. Natural gas extraction releases large amounts of methane into the atmosphere, greatly increasing global greenhouse gas concentrations. Although nuclear power plants do not release carbon dioxide through electricity generation, there are risks associated with nuclear waste and safety concerns associated with the use of nuclear sources. Per unit of electricity generated, the life-cycle greenhouse gas emissions of coal- and gas-fired power are almost always at least ten times those of other generation methods. Centralised and distributed generation Centralised generation is electricity generation by large-scale centralised facilities, sent through transmission lines to consumers. These facilities are usually located far away from consumers and distribute the electricity through high-voltage transmission lines to a substation, where it is then distributed to consumers; the basic concept is that multi-megawatt or gigawatt-scale stations create electricity for a large number of people. The vast majority of electricity used is created from centralised generation. Most centralised power generation comes from large power plants run on fossil fuels such as coal or natural gas, though nuclear or large hydroelectricity plants are also commonly used. Centralised generation is fundamentally the opposite of distributed generation. Distributed generation is the small-scale generation of electricity for smaller groups of consumers. This can also include independently producing electricity by either solar or wind power. In recent years distributed generation has seen a surge in popularity due to its propensity to use renewable energy generation methods such as rooftop solar. Technologies Centralised energy sources are large power plants that produce huge amounts of electricity for a large number of consumers. Most power plants used in centralised generation are thermal power plants, meaning that they burn a fuel to boil water into pressurised steam, which in turn spins a turbine and generates electricity. This is the traditional way of producing energy.
This process relies on several forms of technology to produce widespread electricity, these being coal, natural gas and nuclear forms of thermal generation. More recently, solar and wind have become large scale. Solar Wind Coal Natural gas Natural gas is ignited to create pressurised gas which is used to spin turbines to generate electricity. Natural gas plants use a gas turbine where natural gas is added along with oxygen, which combusts and expands through the turbine to force a generator to spin. Natural gas power plants are more efficient than coal-fired generation; they also contribute to climate change, though not as heavily as coal generation. Not only do they produce carbon dioxide from the ignition of natural gas, but the extraction of gas also releases a significant amount of methane into the atmosphere. Nuclear Nuclear power plants create electricity through steam turbines where the heat input comes from the process of nuclear fission. Currently, nuclear power produces 11% of all electricity in the world. Most nuclear reactors use uranium as a source of fuel. In a process called nuclear fission, energy, in the form of heat, is released when atomic nuclei are split. Electricity is created through the use of a nuclear reactor where heat produced by nuclear fission is used to produce steam which in turn spins turbines and powers the generators. Although there are several types of nuclear reactors, all fundamentally use this process. Normal emissions from nuclear power plants are primarily waste heat and radioactive spent fuel. In a reactor accident, significant amounts of radioisotopes can be released to the environment, posing a long-term hazard to life. This hazard has been a continuing concern of environmentalists. Accidents such as the Three Mile Island accident, Chernobyl disaster and the Fukushima nuclear disaster illustrate this problem. Electricity generation capacity by country The table lists 45 countries with their total electricity capacities. The data is from 2022. According to the Energy Information Administration, the total global electricity capacity in 2022 was nearly 8.9 terawatts (TW), more than four times the total in 1981. The global average per-capita electricity capacity was about 1,120 watts in 2022, nearly two and a half times the 1981 figure. Iceland has the highest installed capacity per capita in the world, at about 8,990 watts. All developed countries have a per-capita electricity capacity above the global average, with the United Kingdom having the lowest per-capita capacity among developed countries. See also Glossary of power generation Cogeneration: the use of a heat engine or power station to generate electricity and useful heat at the same time. Cost of electricity by source Diesel generator Engine-generator World energy supply and consumption Generation expansion planning == References ==
sustainable development goal 13
Sustainable Development Goal 13 (SDG 13 or Global Goal 13) is to limit and adapt to climate change. It is one of 17 Sustainable Development Goals established by the United Nations General Assembly in 2015. The official mission statement of this goal is to "Take urgent action to combat climate change and its impacts". SDG 13 and SDG 7 on clean energy are closely related and complementary.: 101 SDG 13 has five targets which are to be achieved by 2030. They cover a wide range of issues surrounding climate action. The first three targets are outcome targets: strengthen resilience and adaptive capacity to climate-related disasters; integrate climate change measures into policies and planning; build knowledge and capacity to meet climate change. The remaining two targets are means-of-implementation targets: to implement the UN Framework Convention on Climate Change (UNFCCC), and to promote mechanisms to raise capacity for planning and management. Along with each target, there are indicators that provide a method to review the overall progress of each target. The UNFCCC is the primary international, intergovernmental forum for negotiating the global response to climate change. Under the 2015 Paris Agreement, nations collectively agreed to keep warming "well under 2 °C". However, with the pledges made under the Agreement, global warming would still reach about 2.7 °C (4.9 °F) by the end of the century. As of 2020, many countries are implementing their national climate change adaptation plans.: 15 Background SDG 13 intends to take urgent action in order to combat climate change and its impacts. Climate change threatens people with increased flooding, extreme heat, increased food and water scarcity, more disease, and economic loss. Human migration and conflict can also be a result. Many climate change impacts are already felt at the current 1.2 °C (2.2 °F) level of warming, and additional warming will increase these impacts and can trigger tipping points, such as the melting of the Greenland ice sheet. Reducing emissions requires generating electricity from low-carbon sources rather than burning fossil fuels. This change includes phasing out coal and natural gas fired power plants, vastly increasing use of wind, solar, and other types of renewable energy, and reducing energy use. Targets, indicators and progress SDG 13 has five targets. The targets are to strengthen resilience and adaptive capacity to climate-related disasters (Target 13.1), integrate climate change measures into policies and planning (Target 13.2), build knowledge and capacity to meet climate change (Target 13.3), implement the UN Framework Convention on Climate Change (Target 13.a), and promote mechanisms to raise capacity for planning and management (Target 13.b). Each target includes one or more indicators that help to measure and monitor progress. Examples of indicators are the number of deaths, missing people and directly affected people attributed to disasters per 100,000 population (13.1.1) and total greenhouse gas emissions generated per year (13.2.2). Target 13.1: Strengthen resilience and adaptive capacity to climate-related disasters The full text of Target 13.1 is: "Strengthen resilience and adaptive capacity to climate-related hazards and natural disasters in all countries". This target has three indicators.
Indicator 13.1.1: "Number of deaths, missing people and directly affected people attributed to disasters per 100,000 population" Indicator 13.1.2: "Number of countries that adopt and implement national disaster risk reduction strategies in line with the Sendai Framework for Disaster Risk Reduction 2015–2030" Indicator 13.1.3: "Proportion of local governments that adopt and implement local disaster risk reduction strategies in line with national disaster risk reduction strategies" Indicator 13.1.2 serves as a bridge between the Sustainable Development Goals and the Sendai Framework for Disaster Risk Reduction. In April 2020, the number of countries and territories that had adopted national disaster risk reduction strategies reached 118, compared to 48 in the first year of the Sendai Framework. Target 13.2: Integrate climate change measures into policy and planning The full text of Target 13.2 is: "Integrate climate change measures into national policies, strategies and planning". This target has two indicators: Indicator 13.2.1: "Number of countries with nationally determined contributions, long-term strategies, national adaptation plans, strategies as reported in adaptation communications and national communications". Indicator 13.2.2: "Total greenhouse gas emissions per year" In order to stay under 1.5 °C of global warming, carbon dioxide (CO₂) emissions from G20 countries need to decline by about 45% by 2030 and attain net zero in 2050. To meet the 1.5 °C, or even the 2 °C maximum set by the Paris Agreement, greenhouse gas emissions must fall by 7.6% per year starting in 2020 (the worked example below shows how this rate compounds over the decade). However, there is a large gap between these overall temperature targets and the nationally determined contributions set by individual countries. Between 2000 and 2018, greenhouse gas emissions of transition economies and developed countries declined by 6.5%. In contrast, developing countries saw their emissions go up by 43% between 2000 and 2013. As of 2015, 170 countries are party to at least one multilateral environmental agreement, and the number of countries signing on to environmental agreements increases each year. Target 13.3: Build knowledge and capacity to meet climate change The full text of Target 13.3 is: "Improve education, awareness-raising and human and institutional capacity on climate change mitigation, adaptation, impact reduction and early warning". This target has two indicators: Indicator 13.3.1: "The extent to which (i) global citizenship education and (ii) education for sustainable development are mainstreamed in (a) national education policies; (b) curricula; (c) teacher education; and (d) student assessment" Indicator 13.3.2: "Number of countries that have communicated the strengthening of institutional, systemic and individual capacity-building to implement adaptation, mitigation and technology transfer, and development actions" The indicator 13.3.1 measures the extent to which countries mainstream Global Citizenship Education (GCED) and Education for Sustainable Development (ESD) in their education systems and educational policies. The indicator 13.3.2 identifies countries that have and have not adopted and implemented disaster risk management strategies in line with the Sendai Framework for Disaster Risk Reduction.
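The 7.6 per cent annual reduction cited under Target 13.2 compounds year on year. A minimal sketch of that arithmetic, with the 2020 emissions level normalised to 100 (the normalisation is an assumption for illustration, not a figure from the indicator):

```python
# Compound a 7.6% annual cut in emissions from 2020 to 2030,
# as cited under Target 13.2. Values are normalised so 2020 = 100.
level = 100.0
for year in range(2020, 2031):
    print(f"{year}: {level:6.1f}")
    level *= 1 - 0.076

# After ten annual cuts, the 2030 level is 100 * (1 - 0.076)**10, about 45.4,
# i.e. a reduction of roughly 55% relative to 2020.
```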
Education for Sustainable Development and Global Citizenship seeks to equip learners with the knowledge of how their choices impact others and their immediate environment. There is currently no data available for this indicator as of September 2020. Target 13.a: Implement the UN Framework Convention on Climate Change The full text of Target 13.a is: "Implement the commitment undertaken by developed-country parties to the United Nations Framework Convention on Climate Change to a goal of mobilizing jointly $100 billion annually by 2020 from all sources to address the needs of developing countries in the context of meaningful mitigation actions and transparency on implementation and fully operationalize the Green Climate Fund through its capitalization as soon as possible." This target has only one indicator: Indicator 13.a.1 is the "Amounts provided and mobilized in United States dollars per year in relation to the continued existing collective mobilization goal of the $100 billion commitment through to 2025". Previously, the indicator was worded as "Mobilized amount of United States dollars per year between 2020 and 2025 accountable towards the $100 billion commitment". This indicator measures the amounts provided and mobilized in United States dollars (USD) per year, including current pledged commitments from countries to the Green Climate Fund (GCF), in relation to the continued collective mobilization goal of the US$100 billion commitment to 2025. A report by the UN stated in 2020 that the financial flows for global climate finance as well as for renewable energy are "relatively small in relation to the scale of annual investment needed for a low-carbon, climate-resilient transition".: 15 Target 13.b: Promote mechanisms to raise capacity for planning and management The full text of Target 13.b is: "Promote mechanisms for raising capacity for effective climate change-related planning and management in least developed countries and small island developing States, including focusing on women, youth and local and marginalized communities acknowledging that the United Nations Framework Convention on Climate Change is the primary international, intergovernmental forum for negotiating the global response to climate change." This target has one indicator: Indicator 13.b.1 is the "Number of least developed countries and small island developing states with nationally determined contributions, long-term strategies, national adaptation plans, strategies as reported in adaptation communications and national communications". A previous version of this indicator was: "Indicator 13.b.1: Number of least developed countries and small island developing states that are receiving specialized support, and amount of support, including finance, technology and capacity building, for mechanisms for raising capacities for effective climate change-related planning and management, including focusing on women, youth and local and marginalized communities." The previous focus on women, youth and local and marginalized communities is no longer included in the latest version of the indicator.
Annual UN reports monitor how many countries are implementing national adaptation plans.: 15 Custodian agencies Custodian agencies are in charge of reporting on the following indicators: Indicators 13.1.1, 13.1.2 and 13.1.3: UN International Strategy for Disaster Reduction (UNISDR). Indicator 13.2.1: United Nations Framework Convention on Climate Change (UNFCCC), UN Educational, Scientific, and Cultural Organization-Institute for Statistics (UNESCO-UIS). Indicators 13.3.1, 13.a.1 and 13.b.1: United Nations Framework Convention on Climate Change (UNFCCC) and Organization for Economic Cooperation and Development (OECD). Monitoring High-level progress reports for all the SDGs are published in the form of reports by the United Nations Secretary-General. Updates and progress can also be found on the SDG website that is managed by the United Nations and at Our World in Data. Challenges Impacts of the COVID-19 pandemic During the COVID-19 pandemic, there was a reduction in economic activity. This resulted in a 6% drop in greenhouse gas emissions from what was initially projected for 2020; however, these improvements were only temporary. Greenhouse gas emissions rebounded later in the pandemic as many countries began lifting restrictions, with the direct impact of pandemic policies having a negligible long-term impact on climate change. A rebound in transport pollution has occurred since government lockdown restrictions were lifted. Transport accounts for roughly 21% of global carbon emissions, as it remains 95% dependent on oil. After the pandemic, governments globally rushed to stimulate local economies by putting money towards fossil fuel production. Funding for such economic policies is likely to divert the emergency funds usually afforded to climate funding, such as the Green Climate Fund and other sustainable policies, unless an emphasis is put on green deals in the redirection of monetary funds. Russian invasion of Ukraine The Russian invasion of Ukraine and the resulting trade sanctions had a further adverse effect on SDG 13, as some countries responded to the crisis by increasing domestic oil production. Links with other SDGs Sustainable Development Goal 13 can connect with the other 16 SDGs. The level of climate ambition a country can feasibly agree to (SDG 13) often corresponds to its level of poverty (SDG 1).: 2  Increasing access to sustainable energy (SDG 7) will reduce greenhouse gas emissions, a large pillar of climate action.: 101  Partnerships for the goals (SDG 17) by nature connect all 17 goals together. LinkedSDG The United Nations created a platform called LinkedSDG designed to make information about the Sustainable Development Goals more accessible to stakeholders and the general public via charts and infographics. The platform can be accessed at linkedsdg.org. Organizations United Nations organizations Climate target United Nations Framework Convention on Climate Change (UNFCCC) Intergovernmental Panel on Climate Change (IPCC) Conferences of the Parties (COP) World Meteorological Organization (WMO) UN-Habitat United Nations Environment Programme (UNEP) Green Climate Fund (GCF) References Sources Cattaneo, Cristina; Beine, Michel; Fröhlich, Christiane J.; Kniveton, Dominic; et al. (2019). "Human Migration in the Era of Climate Change". Review of Environmental Economics and Policy. 13 (2): 189–206. doi:10.1093/reep/rez008. ISSN 1750-6816. S2CID 198660593.
IPCC (2022). Pörtner, H.-O.; Roberts, D.C.; Tignor, M.; Poloczanska, E.S.; Mintenbeck, K.; Alegría, A.; Craig, M.; Langsdorf, S.; Löschke, S.; Möller, V.; Okem, A.; Rama, B.; et al. (eds.). Climate Change 2022: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. United Nations Environment Programme (2021). Emissions Gap Report 2021 (PDF). Nairobi. ISBN 978-92-807-3890-2. Arias, Paola A.; Bellouin, Nicolas; Coppola, Erika; Jones, Richard G.; et al. (2021). "Technical Summary" (PDF). IPCC AR6 WG1 2021. IPCC (2021). Masson-Delmotte, V.; Zhai, P.; Pirani, A.; Connors, S. L.; et al. (eds.). Climate Change 2021: The Physical Science Basis (PDF). Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press (In Press). External links UN Sustainable Development Knowledge Platform – SDG 13 “Global Goals” Campaign - SDG 13 SDG-Track.org - SDG 13 UN SDG 13 in the US
tundra
In physical geography, tundra () is a type of biome where tree growth is hindered by frigid temperatures and short growing seasons. The term tundra comes through Russian тундра (tundra) from the Kildin Sámi word тӯндар (tūndâr) meaning "uplands", "treeless mountain tract". There are three regions and associated types of tundra: Arctic tundra, alpine tundra, and Antarctic tundra. Tundra vegetation is composed of dwarf shrubs, sedges, grasses, mosses, and lichens. Scattered trees grow in some tundra regions. The ecotone (or ecological boundary region) between the tundra and the forest is known as the tree line or timberline. The tundra soil is rich in nitrogen and phosphorus. The soil also contains large amounts of biomass and decomposed biomass that has been stored as methane and carbon dioxide in the permafrost, making the tundra soil a carbon sink. As global warming heats the ecosystem and causes soil thawing, the permafrost carbon cycle accelerates and releases much of this soil-contained greenhouse gas into the atmosphere, creating a feedback cycle that increases climate change. Arctic Arctic tundra occurs in the far Northern Hemisphere, north of the taiga belt. The word "tundra" usually refers only to the areas where the subsoil is permafrost, or permanently frozen soil. (It may also refer to the treeless plain in general, so that northern Sápmi would be included.) Permafrost tundra includes vast areas of northern Russia and Canada. The polar tundra is home to several peoples who are mostly nomadic reindeer herders, such as the Nganasan and Nenets in the permafrost area (and the Sami in Sápmi). Arctic tundra contains areas of stark landscape and is frozen for much of the year. The soil there is frozen from 25 to 90 cm (10 to 35 in) down, making it impossible for trees to grow. Instead, bare and sometimes rocky land can only support certain kinds of Arctic vegetation: low-growing plants such as moss, heath (Ericaceae varieties such as crowberry and black bearberry), and lichen. There are two main seasons, winter and summer, in the polar tundra areas. During the winter it is very cold, dark, and windy, with the average temperature around −28 °C (−18 °F), sometimes dipping as low as −50 °C (−58 °F). However, extreme cold temperatures on the tundra do not drop as low as those experienced in taiga areas further south (for example, Russia's and Canada's lowest temperatures were recorded in locations south of the tree line). During the summer, temperatures rise somewhat, and the top layer of seasonally frozen soil melts, leaving the ground very soggy. The tundra is covered in marshes, lakes, bogs, and streams during the warm months. Generally daytime temperatures during the summer rise to about 12 °C (54 °F) but can often drop to 3 °C (37 °F) or even below freezing. Arctic tundras are sometimes the subject of habitat conservation programs. In Canada and Russia, many of these areas are protected through a national Biodiversity Action Plan. Tundra tends to be windy, with winds often blowing upwards of 50–100 km/h (30–60 mph). However, it is desert-like, with only about 150–250 mm (6–10 in) of precipitation falling per year (the summer is typically the season of maximum precipitation). Although precipitation is light, evaporation is also relatively minimal. During the summer, the permafrost thaws just enough to let plants grow and reproduce, but because the ground below this is frozen, the water cannot sink any lower, so the water forms the lakes and marshes found during the summer months.
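The temperature regime described above is what places the Arctic tundra in the Köppen ET class discussed under Climatic classification below. A minimal sketch of that test, taking twelve monthly mean temperatures in °C; the station values are invented for illustration:

```python
# Köppen polar-climate test, following the Climatic classification section:
# a tundra (ET) climate has a warmest monthly mean above 0 °C but not above 10 °C.
def koppen_polar_class(monthly_means_c):
    """Classify a station from twelve monthly mean temperatures in °C."""
    warmest = max(monthly_means_c)
    if warmest <= 0:
        return "EF (ice cap)"
    if warmest <= 10:
        return "ET (tundra)"
    return "not a polar (E) climate"

# Invented monthly means for a hypothetical Arctic tundra station,
# roughly matching the -28 °C winter / mild-summer regime described above.
station = [-28, -27, -24, -16, -6, 2, 9, 8, 3, -8, -18, -25]
print(koppen_polar_class(station))  # -> ET (tundra)
```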
There is a natural pattern of accumulation of fuel and wildfire which varies depending on the nature of vegetation and terrain. Research in Alaska has shown fire-event return intervals (FRIs) that typically vary from 150 to 200 years, with drier lowland areas burning more frequently than wetter highland areas. The biodiversity of tundra is low: 1,700 species of vascular plants and only 48 species of land mammals can be found, although millions of birds migrate there each year for the marshes. There are also a few fish species. There are few species with large populations. Notable plants in the Arctic tundra include blueberry (Vaccinium uliginosum), crowberry (Empetrum nigrum), reindeer lichen (Cladonia rangiferina), lingonberry (Vaccinium vitis-idaea), and Labrador tea (Rhododendron groenlandicum). Notable animals include reindeer (caribou), musk ox, Arctic hare, Arctic fox, snowy owl, ptarmigan, northern red-backed voles, lemmings, the mosquito, and even polar bears near the ocean. Tundra is largely devoid of poikilotherms such as frogs or lizards. Due to the harsh climate of Arctic tundra, regions of this kind have seen little human activity, even though they are sometimes rich in natural resources such as petroleum, natural gas, and uranium. In recent times this has begun to change in Alaska, Russia, and some other parts of the world: for example, the Yamalo-Nenets Autonomous Okrug produces 90% of Russia's natural gas. Relationship to climate change A severe threat to tundra is global warming, which causes permafrost to thaw. The thawing of the permafrost in a given area on human time scales (decades or centuries) could radically change which species can survive there. It also represents a significant risk to infrastructure built on top of permafrost, such as roads and pipelines. In locations where dead vegetation and peat have accumulated, there is a risk of wildfire, such as the 1,039 km2 (401 sq mi) of tundra which burned in 2007 on the north slope of the Brooks Range in Alaska. Such events may both result from and contribute to global warming. Greenhouse gas emissions Antarctic Antarctic tundra occurs on Antarctica and on several Antarctic and subantarctic islands, including South Georgia and the South Sandwich Islands and the Kerguelen Islands. Most of Antarctica is too cold and dry to support vegetation, and most of the continent is covered by ice fields or cold deserts. However, some portions of the continent, particularly the Antarctic Peninsula, have areas of rocky soil that support plant life. The flora presently consists of around 300–400 species of lichens, 100 mosses, 25 liverworts, and around 700 terrestrial and aquatic algae species, which live on the areas of exposed rock and soil around the shore of the continent. Antarctica's two flowering plant species, the Antarctic hair grass (Deschampsia antarctica) and Antarctic pearlwort (Colobanthus quitensis), are found on the northern and western parts of the Antarctic Peninsula. In contrast with the Arctic tundra, the Antarctic tundra lacks a large mammal fauna, mostly due to its physical isolation from the other continents. Sea mammals and sea birds, including seals and penguins, inhabit areas near the shore, and some small mammals, like rabbits and cats, have been introduced by humans to some of the subantarctic islands. The Antipodes Subantarctic Islands tundra ecoregion includes the Bounty Islands, Auckland Islands, Antipodes Islands, the Campbell Island group, and Macquarie Island.
Species endemic to this ecoregion include Corybas dienemus and Corybas sulcatus, the only subantarctic orchids; the royal penguin; and the Antipodean albatross. There is some ambiguity about whether Magellanic moorland, on the west coast of Patagonia, should be considered tundra or not. Phytogeographer Edmundo Pisano called it tundra (Spanish: tundra Magallánica) since he considered the low temperatures key to restricting plant growth. The flora and fauna of Antarctica and the Antarctic Islands (south of 60° south latitude) are protected by the Antarctic Treaty. Alpine Alpine tundra does not contain trees because the climate and soils at high altitude block tree growth.: 51  The cold climate of the alpine tundra is caused by low air temperatures, and is similar to the polar climate. Alpine tundra is generally better drained than arctic soils. Alpine tundra transitions to subalpine forests below the tree line; stunted forests occurring at the forest-tundra ecotone (the treeline) are known as Krummholz. Alpine tundra occurs in mountains worldwide. The flora of the alpine tundra is characterized by plants that grow close to the ground, including perennial grasses, sedges, forbs, cushion plants, mosses, and lichens. The flora is adapted to the harsh conditions of the alpine environment, which include low temperatures, dryness, ultraviolet radiation, and a short growing season. Climatic classification Tundra climates ordinarily fit the Köppen climate classification ET, signifying a local climate in which at least one month has an average temperature high enough to melt snow (0 °C (32 °F)), but no month with an average temperature in excess of 10 °C (50 °F). The cold limit generally meets the EF climates of permanent ice and snow; the warm-summer limit generally corresponds with the poleward or altitudinal limit of trees, where they grade into the subarctic climates designated Dfd, Dwd and Dsd (extreme winters as in parts of Siberia) and Dfc (cold winters with months of freezing), typical of Alaska, Canada, mountain areas of Scandinavia, European Russia, and Western Siberia. Despite the potential diversity of climates in the ET category involving precipitation, extreme temperatures, and relative wet and dry seasons, this category is rarely subdivided. Rainfall and snowfall are generally slight due to the low vapor pressure of water in the chilly atmosphere, but as a rule potential evapotranspiration is extremely low, allowing soggy terrain of swamps and bogs even in places that get precipitation typical of deserts of lower and middle latitudes. The amount of native tundra biomass depends more on the local temperature than the amount of precipitation. Places featuring a tundra climate See also Alas Fellfield List of tundra ecoregions from the WWF Mammoth steppe Park Tundra References Further reading External links WWF Tundra Ecoregions Archived 23 February 2010 at the Wayback Machine The Arctic biome at Classroom of the Future Arctic Feedbacks to Global Warming: Tundra Degradation in the Russian Arctic British Antarctica Survey Antarctica: West of the Transantarctic Mountains World Map of Tundra
brunner island steam electric station
Brunner Island Steam Electric Station is a coal-fired, alternatively natural gas-powered electrical generation facility in York County, Pennsylvania. It occupies most of the area of the eponymous island on the Susquehanna River. The power plant has three major units, which came online in 1961, 1965, and 1969, with respective generating capacities of 334 MW, 390 MW, and 759 MW (in winter conditions). In addition, three internal combustion generators (2.8 MWe each) were installed in 1967. Talen Energy will stop coal use at the plant in 2028. Environmental impact PPL, the owner of the plant at the time, announced in 2005 that it would begin to install scrubbers at the plant and that installation would be complete by 2009. The scrubbers, PPL says, are intended to remove 100,000 tons of sulfur annually. The facility was cited as one of several facilities in the region by a USA Today study of air quality around area schools as a potential source of significant pollutants. Fly ash from the Brunner Island facility is approved for use in construction projects, especially for "use in concrete mixes to reduce alkali silica reactivity of aggregate." Greenhouse gas emissions In 2021, the facility produced 2.28 megatonnes of carbon dioxide equivalent (MtCO2e) in greenhouse gas emissions. This is the same climate impact as 491,312 gasoline-powered passenger vehicles driven for one year. With respect to greenhouse gas emissions, out of 89 power stations in the state, Brunner Island ranks as the 13th most polluting. Sulphur dioxide emissions In 2006, Brunner Island ranked 27th on the list of the most polluting major power stations in the US in terms of sulphur dioxide gas emission rate: it discharged 20.49 pounds (9.29 kg) of SO2 for each MWh of electric power produced that year (93,545 tons of SO2 per year in total). Scrubbers began operation in 2009, removing about 90 percent of sulfur dioxide emissions and also reducing mercury emissions. They spray a mixture of crushed limestone and water onto the exhaust gas before it goes out the plant's chimney. Sulfur reacts with the limestone and water in the plant's exhaust, forming synthetic gypsum. This is collected and shipped to a drywall manufacturing company. Waste heat Brunner Island discharges all of its waste heat (about 1.44 times its electrical output) into cooling towers that were new as of 2009. Conversion to Natural Gas As part of a 2018 out-of-court settlement with the Sierra Club, which had previously sued the plant and its current owner, Talen Energy, over air and water pollution, Brunner Island will eventually completely phase out coal. By 2023, Brunner Island will stop burning coal from May to September, which is considered peak smog season. By 2028, the facility will have completely switched over to natural gas. See also List of power stations in Pennsylvania == References ==
dairy farming in canada
Dairy farming is one of the largest agricultural sectors in Canada. Dairy has a significant presence in all of the provinces and is one of the top two agricultural commodities in seven out of ten provinces. In 2018, there were 967,700 dairy cows on 10,679 farms across the country. Quebec and Ontario are the major dairy-producing provinces, with 5,120 and 3,534 farms, which produce 37% and 33% of Canada's total milk respectively. Dairy farmers reportedly represent 8% of farmers in Canada. While dairy farming is still prominent in Canadian society, the number of dairy farms in Canada has been dropping significantly since 1971, while the size of the average farm has increased markedly, to 89 cows per farm. The Canadian dairy sector contributes approximately $19.9 billion yearly to Canada's GDP, sustains approximately 221,000 full-time equivalent jobs and generates $3.8 billion in tax revenues. On average, two-thirds of Canadian dairy produced is sold as fluid milk, while the remaining one-third is refined into other dairy products such as cheese and butter. In Canada, dairy farming is subject to the system of supply management. Under supply management, which also includes the egg and poultry sectors, farmers manage their production so that it coincides with forecasts of demand for their products over a predetermined period, while taking into account certain imports that enter Canada, as well as some production which is shipped to export markets. Imports of dairy, eggs, and poultry are controlled using tariff rate quotas, or TRQs. These allow a predetermined quantity to be imported at preferential tariff rates (generally duty-free), while maintaining control over how much is imported. The over-quota tariffs are set at levels where practically no dairy products are sold to Canada above the quotas. That should allow Canadian farmers to receive a price reflecting the cost of production in the country. There has been pushback regarding the supply management system, and research indicates that the Canadian population generally has varied views on the current system. The Dairy Farmers of Canada, a dairy advocacy group, claims that the system is necessary for farmers to provide quality milk to consumers. History The Canadian Dairy Farmers' Federation was founded in 1934. The group became Dairy Farmers of Canada in 1942, and its mandate was to stabilize the dairy market and increase revenues for dairy farmers. In the face of lobbying, government programs were instituted in the 1940s and 1950s to increase prices and limit imports. 1958 saw the creation of the Agricultural Stabilization Board, though it was not limited to dairy. In the 1950s and 1960s there was significant volatility in dairy prices; dairy processors were seen as having too much bargaining power relative to dairy farmers, and the United Kingdom was poised to enter the European Common Market, resulting in the loss of Canada's largest dairy export partner. These challenges led to the creation of the Canadian Dairy Commission, whose mandate was to ensure the quality and supply of milk, ensure that producers received a "fair" return on investment, and set prices based on production costs, market price, consumers' ability to pay, and current economic conditions. 2021 "Buttergate" In 2021, Canadian dairy received national and international attention due to an alleged change in the texture of Canadian butter. Consumers also claimed that the butter was not softening at room temperature.
Dubbed Buttergate, the controversy began with a column in the Globe and Mail asserting that, among other factors, the use of palmitic oil, derived from palm oil, as a feed supplement was causing the change in the texture of butter. Demand for butter in Canada increased during the COVID-19 pandemic, and farmers were supposedly using palmitic oil to increase yields. A wider discussion was sparked about dairy in Canada, with strong opinions about the use of palmitic oil from some, such as Professor Sylvain Charlebois of Dalhousie University. While some academics and scientists rejected the palmitic oil claims due to a lack of hard evidence, subsequent studies did provide new evidence that palmitic acids can make butter harder at room temperature. Statistics Snapshot of the Canadian dairy industry Supply management The government of Canada put in place a supply management system during the early 1970s as an effort to reduce the surplus in production that had become common in the 1950s and 1960s and to "ensure" a fair return for farmers. Supply management is a shared jurisdiction between the federal and provincial governments. There is the Canada-wide Canadian Dairy Commission, composed mostly of dairy farmers, while in Ontario there is the Dairy Farmers of Ontario. Other provinces also have similar local boards. In 1983, the National Milk Marketing Plan came into effect to control supply, setting guidelines for calculating Market Sharing Quota. This agreement is between the federal and provincial governments. The Milk Marketing Plan was created to replace the Comprehensive Milk Marketing Agreement, which was initially established in 1971. By 1983, every province except Newfoundland had signed onto the Milk Marketing Agreement. Following dairy, a national supply management system was implemented for eggs in 1972, turkey in 1974, chicken in 1978 and chicken hatching eggs in 1986. Supply management attempts to manage production so that supply is in balance with demand, and so that the farm gate price enables farmers to cover their costs of production, including a return on labour and capital. Each farm owns a number of shares in the market (quota), and is required to increase or decrease production according to consumer demand. Because production is in sync with demand, farmers avoid overproduction and earn a predictable and stable revenue directly from the market. Canada's supply management system for dairy products benefits Canadian dairy farmers. The consequence of such a system is artificially higher dairy prices in Canada, which may be the reason that some individuals are consuming fewer dairy products in favour of alternative products, such as almond or soy milk. There is concern regarding the political influence of supply management, given that the number of dairy farmers in Canada has been dropping significantly since 1971, the small percentage of dairy farmers compared to other farmers in Canada, the amount spent to protect the system and the tactics used, the electoral clout that dairy farmers have in elections, and the fact that the average dairy farmer has become significantly wealthier in terms of net worth. These groups also feel that the system should be abolished in order to increase food manufacturing, reduce food waste, reduce poverty and prevent future food shortages.
In addition, critics argue that the Canadian dairy system prevents Canadian dairy farmers from participating in the global dairy market, potentially limiting their expansion even if they could compete with artificially low international milk prices, and that it should be done away with in light of Canada's commitment to free trade. Regulations Canadian dairy farmers follow regulations outlined by the Canadian Food Inspection Agency intended to provide proper oversight of dairy production and to maintain biosecurity standards in the areas of environmental protection, human health, animal health, and animal welfare. CFIA biosecurity standards are, however, voluntary. In adhering to these regulations, dairy farmers can make certain that dairy standards are sustained. During the 2015 TPP negotiations it was revealed that Health Canada had not found evidence of adverse health effects in humans from the consumption of recombinant bovine somatotrophin (rBST) growth hormone products. Without a labeling requirement, if Canadians chose to consume only Canadian dairy products in order to avoid consuming rBST, there would be no way of knowing the origins of milk ingredients. Processed food sold in Canada could contain ingredients from cows from the U.S. that were treated with rBST. Animal Welfare The main welfare issues regarding Canadian dairy production include the immediate separation of calves from their mothers, the isolation and confinement of male calves, various painful invasive procedures, lameness, confined living conditions, rough handling practices, stressful transportation environments, pre-slaughter conditions, and the slaughter itself. A 2018 review of Canadian dairy farms found that many dairy cows intended to be slaughtered, known as cull dairy cows, are transported to widely dispersed and specialized slaughter plants, and they may experience multiple handling events (e.g., loading, unloading, mixing), change of ownership among dealers, and feed and water deprivation during transport and at livestock markets. According to the Canadian Veterinary Medical Association, dairy cows that are considered to be of low or reduced economic value are removed (culled) from the herd for a variety of reasons including reproductive issues (e.g., fertility), low milk production, mastitis, lameness, and other forms of ill-health. Cull dairy cows may be in poor condition and as such may be at greater risk of suffering during standard transport and slaughter. The Canadian dairy industry is often criticized by animal rights and animal welfare groups, such as the Society for the Prevention of Cruelty to Animals, Canadians for the Ethical Treatment of Animals, Mercy for Animals, and Humane Canada. Alberta Milk, an industry advocacy group, argues that the separation of calves from their mothers is not unethical because quickly separating calves results in a much smaller risk of sickness and the mother quickly forgets about her calf. However, a 2019 review found no consistent evidence in support of early separation for cow and calf health, and a 2008 review states that early weaning causes distress to both cow and calf. The Ontario Ministry of Agriculture is currently in favour of dehorning and disbudding, stating that it provides economic benefits and increases safety. It also takes the position that dehorning and disbudding without anaesthesia is inhumane and unethical, but there is no requirement for anaesthesia use under the Ontario Society for the Prevention of Cruelty to Animals Act.
No dairy industry practices are prohibited in the Criminal Code of Canada, including painful invasive procedures done without the use of painkillers. A 2007 review stated that dehorning and similar mutilations are not necessary for safety. "ProAction" is a program started in 2010 by the Dairy Farmers of Canada, an industry governing body. It is a mandatory program which regulates farm practices regarding a wide range of food safety, environmental, and animal welfare concerns, including anaesthesia, euthanasia, tail docking, animal handling, and animal hygiene. Continued non-compliance results in progressive penalties, such as fines, and eventually in suspension of milk pickup. Environmental impact The Canadian dairy industry is responsible for 20% of total greenhouse gas (GHG) emissions generated in Canada by livestock agriculture, which is made up of the dairy, poultry, swine and beef industries. 90% of the GHG emissions caused by Canadian dairy farming occur as a result of events on the farm, whereas only 10% are emitted as a result of off-farm processes such as processing and refining. The greatest amount of GHG produced by Canadian dairy cows occurs at the time of lactation. GHG emissions from dairy farms in Western Canada are typically lower than in Eastern Canada, primarily as a function of differences in climate and raw milk production processes. Consequently, the Eastern provinces of Canada contribute 78.5% of the GHG emissions created by the Canadian dairy farming industry. Feed sources and greenhouse gas emissions The type of feed utilized by Canadian dairy farmers significantly affects the amount of GHG emitted as a result of dairy production. Canadian dairy farmers commonly feed their cattle corn or barley silage as high-nutrient food sources to increase milk production. Although corn and barley are both efficient and economic sources of feed, these two feed sources are responsible for large amounts of greenhouse gas (GHG) emissions in Canada. While both of these types of feed contribute to significant amounts of GHG, research reveals that corn produces lower amounts of GHG in comparison to barley. Comparisons of CH4, N2O and CO2 measurements suggest that the total GHG emissions produced by a single cow in Canada, relative to the amount of milk produced, are 13% lower when the cow is fed corn rather than barley. Additionally, corn silage feed is associated with higher milk production across dairy cows compared to barley silage feed. Despite the decrease in GHG from utilizing corn feed on Canadian dairy farms, when the processing and transportation costs of feed are examined, corn silage production is responsible for a 9% increase in CO2 compared to the processing and transportation associated with barley silage production. Despite these higher transportation-related emissions, corn still results in lower rates of GHG overall. While corn and barley are two commonly used types of feed by Canadian dairy farmers, the forage alfalfa, while less commonly used, is a feed source that would further decrease GHG emissions in comparison to corn. Total Mixed Ration Most dairy farms in Canada feed what is called a Total Mixed Ration (TMR): a variety of feedstuffs combined into a large mixture that is mixed well and then fed to the cows.
These rations vary among farms based on the farm's goals and available feed sources. The goal of a TMR is to make every bite a cow eats the same, so that feed intake can be monitored and adjusted accordingly. TMRs offer many advantages for cow health, such as increased rumen activity, which leads to less acid build-up, better feed absorption and, in turn, higher milk production. Organic farming Costs associated with organic farming are substantially lower than costs incurred by conventional farming. Organic Canadian dairy farms have been shown to have a lower overall cost of production and are more self-sufficient in terms of plant and animal nutrient recycling and restocking of livestock herds. In contrast, the larger economic surplus enjoyed by conventional dairy farms in Canada is often offset by extra costs associated with importing fertilizers, seed, and replacement cattle, making conventional farming no more economically profitable than organic farming. Both organic and conventional dairy farms exist across Canada. Conventional farming is widely perceived as being the more modern and economically successful method of dairy farming in Canada. Organic dairy farming in Canada is far less prevalent, primarily due to widely held misconceptions that organic farming is unprofitable and risky, as organic farming requires a significant degree of self-sufficiency in all aspects of production. Conventional farming is perceived as being highly advanced technologically, utilizing efficient fertilizers and automated processes throughout the farm, driving down costs associated with physical labour. See also Agriculture in Canada Supply management (Canada) Cheese in Canada References External links History of dairy farming in Canada
climate change in zimbabwe
Climate change impacts are occurring in Zimbabwe, even though the country's contribution to greenhouse gas emissions is minimal. Climate change refers to long-term changes in the Earth's climate due to the release of greenhouse gases like carbon dioxide (CO2) and methane (CH4). These gases trap heat in the atmosphere, leading to global warming. Human activities, such as the use of fossil fuels (coal, oil, and natural gas), as well as large-scale commercial agriculture and deforestation, are responsible for the release of these greenhouse gases. Greenhouse gas emissions The African continent contributes 2%-3% of the global greenhouse gas emissions that drive climate change. Zimbabwe, on the other hand, makes up less than 0.1% of these emissions. Despite its minimal contribution, all African countries have submitted plans to reduce their emissions. In 2015, Zimbabwe committed to reducing its emissions by 33% by the year 2030. However, in 2021, it revised its target to a more ambitious 40% reduction by 2030 across all sectors, demonstrating its dedication to reducing emissions from all emitting sectors. Fossil CO2 emissions in Zimbabwe totaled 10,062,628 tons in 2016. This represented a decrease of 4.17% compared to the previous year: 437,903 tons less than the 2015 emissions of 10,500,531 tons. CO2 emissions per capita in Zimbabwe were 0.70 tons per person in 2016, based on a population of 14,452,704. This signifies a decrease of 0.05 tons from the previous year's figure of 0.74 tons per person, a 6.1% decline in CO2 emissions per capita (see the worked cross-check below). Impact on the natural environment Temperature and weather changes The mean annual temperature increased at a rate of approximately 0.01 to 0.02 degrees Celsius per year from 1950 to 2002. According to the Zimbabwe Meteorological Service, minimum daily temperatures have risen by about 2.6 degrees Celsius over the past century, while maximum daily temperatures have increased by 2 degrees Celsius during the same period. Furthermore, there has been a decrease in cold days and nights and an increase in hot days. These changes align with the overall warming trend, with more hot days and nights and fewer cold days and nights observed in recent decades. Impact on water resource Zimbabwe relies mostly on surface water resources, with limited availability of groundwater resources. The country has a significant number of dams, including large ones, with a total capacity of 99,930 m3. However, Zimbabwe's water resources are projected to be severely impacted by climate change. Rainfall simulations in various catchment areas have shown a decrease in precipitation and an increase in evaporation, leading to a projected 50% decrease in runoff by 2075. The Runde and Mzingwane catchments, in particular, are anticipated to face the largest decline in average rainfall. Additionally, the recharge rates of wetlands and aquifers are expected to be reduced, impacting water availability for irrigation farming. Furthermore, the demand for water for various purposes is projected to grow due to increases in population, urbanization, industry, and evaporation. According to the World Bank, climate change will result in a 38% decline in national per capita water availability by 2050, potentially forcing Zimbabwe's inhabitants to depend more on groundwater sources. Climate change affects water availability and quality, leading to challenges in securing a reliable water supply.
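The emissions figures quoted under Greenhouse gas emissions above are internally consistent, which can be verified with a short calculation. A minimal sketch (all numbers are taken from that section; nothing else is assumed):

```python
# Cross-check of the Zimbabwe CO2 figures quoted in the
# "Greenhouse gas emissions" section above (illustrative arithmetic only).
emissions_2015 = 10_500_531    # tons of fossil CO2
emissions_2016 = 10_062_628    # tons of fossil CO2
population_2016 = 14_452_704

change = emissions_2016 - emissions_2015
print(f"absolute change: {change:,} tons")                 # -437,903 tons
print(f"relative change: {change / emissions_2015:+.2%}")  # -4.17%
print(f"per-capita 2016: {emissions_2016 / population_2016:.2f} tons per person")  # 0.70
```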
Ecosystems Extreme Weather Events. Climate change in Zimbabwe has increased extreme weather events such as droughts, floods, and storms. These events disrupt ecosystems, harm crops, and contribute to soil erosion. Biodiversity Loss. The changes in temperature and precipitation patterns caused by climate change can impact the dispersal and survival of plant and animal species, leading to a decline in biodiversity in Zimbabwe's ecosystems. Zimbabwe's diverse ecosystems face threats from climate change, including loss of biodiversity, habitat destruction, and changes in vegetation. Impact on people Climate change impacts people in Zimbabwe through increased health risks, food insecurity, and displacement due to extreme weather events. Tropical storm Ana struck the country in January 2022, resulting in flash floods in eastern Zimbabwe. The storm caused significant damage to bridges, schools, and roads, affecting 812 households and 51 schools. As a result, over 3,000 households were displaced, with the Manicaland province being the hardest hit. Mashonaland Central and the Midlands experienced severe weather conditions from January 7th to January 11th, 2023. There were six fatalities reported during this period. Tragically, one person lost their life due to flooding in Gwanda, Matabeleland South, while another fatality occurred in Mutasa, Manicaland Province. In Mazowe, Mashonaland Central province, strong winds caused significant damage, resulting in two casualties. Additionally, two individuals lost their lives due to lightning strikes in Chivi, Masvingo Province. According to the Meteorological Services Department of Zimbabwe, Beitbridge, Matabeleland South Province recorded 109 mm of rainfall in 24 hours up to January 7th. Similarly, Buhera, Manicaland Province experienced heavy rain, with 91 mm recorded within 24 hours up to January 11th. In late January, the capital city, Harare, and its surroundings experienced heavy rainfall. The Meteorological Services Department reported that 123 mm of rain fell in 24 hours up to January 22nd. This intense rainfall led to flooding in several parts of Harare and its neighboring areas, resulting in the displacement of numerous households. In Budiriro, a southwestern suburb of Harare, the Marimba River burst its banks, causing damage to or destruction of 43 houses. Similarly, Chitungwiza in Mashonaland West experienced floods that damaged 57 houses and destroyed two others. Economic impacts The country's economy is affected by climate change, with increased costs for disaster response and reduced agricultural productivity. Agriculture and livestock Zimbabwe's agriculture and livestock sectors face challenges from changing climate conditions, including reduced crop yields, water scarcity, and impacts on livestock production. Climate change will result in the emergence of new pests, which will have varying effects in different agricultural ecological zones (AEZs). Several climate change-related factors will contribute to increased crop loss, including reduced resistance in host plants, decreased efficacy of pesticides, and the introduction of invasive pest species. Changes in precipitation and temperature will lead to higher infestation rates of pests and more frequent disease outbreaks, consequently reducing crop and animal productivity and requiring increased expenditures on pesticides, herbicides, and veterinary drugs. A shift in pest distribution is one of the commonly observed biotic responses to climate change.
Agriculture, a critical sector of Zimbabwe's economy, is highly vulnerable to climate change. Altered climatic conditions can cause shifts in planting seasons, reduced crop yields, and water scarcity, all of which have significant effects on food security. Manufacturing sector Extreme weather events, driven by climate change, disrupt supply chains by damaging transportation infrastructure and causing delays in the delivery of raw materials and components. This can result in production slowdowns and increased costs for manufacturers. Rising temperatures have a negative impact on worker productivity in manufacturing facilities. Heat stress and discomfort can lead to decreased efficiency and potential health issues for employees, affecting overall production. Manufacturers often rely on energy-intensive processes. Climate change mitigation efforts, such as carbon pricing or regulations on greenhouse gas emissions, can result in higher energy costs, which can affect the profitability of manufacturing operations. Governments worldwide are implementing stricter environmental regulations to address climate change. Manufacturers may encounter challenges and costs associated with complying with these new requirements. Health impacts Direct and indirect health impacts result from climate change, including changes in disease patterns, heat-related illnesses, and impacts on healthcare infrastructure. Rising temperatures in Zimbabwe, attributed to climate change, have led to an increased incidence of heat-related illnesses. Heat exhaustion and heatstroke are becoming more common, especially in urban areas. Climate change has expanded the range of disease vectors, such as mosquitoes, contributing to a higher prevalence of vector-borne diseases like malaria and dengue fever. These diseases pose a significant health risk. Altered precipitation patterns and reduced access to clean water sources have elevated the risk of waterborne diseases, including cholera and typhoid. Climate change-induced disruptions in agriculture and food production can result in food insecurity and malnutrition, affecting the health and nutrition of the population, particularly vulnerable groups. Mitigation and adaptation Efforts to mitigate and adapt to climate change in Zimbabwe include the promotion of climate-smart agriculture, the reduction of greenhouse gas emissions, and improved water resource management. Zimbabwe is actively implementing strategies to adapt to and mitigate the impacts of climate change. Most smallholder farmers rely on food aid and drought-resistant crops such as sorghum. They employ various strategies to cope with climate change, such as using short-season varieties, engaging in barter trade, practicing multiple cropping, diversifying their livelihoods, implementing dry planting, and adopting early planting methods, in a practice called Ethno-Science Adaptive Measures. These adaptation strategies are sustainable and preferred by smallholder farmers due to their cost-effectiveness and reliance on Indigenous Knowledge Systems (IKS). See also Climate change in Africa Effects of climate change on agriculture Climate change adaptation Agriculture in Zimbabwe Geography of Zimbabwe Health in Zimbabwe Ministry of Environment, Water and Climate (Zimbabwe) == References ==
Emission control area
Emission control areas (ECAs), or sulfur emission control areas (SECAs), are sea areas in which stricter controls were established to minimize airborne emissions from ships, as defined by Annex VI of the 1997 MARPOL Protocol. The emissions specifically include SOx, NOx, ODSs and VOCs, and the regulations came into effect in May 2005. Annex VI contains provisions for two sets of emission and fuel quality requirements regarding SOx and PM, or NOx: a global requirement, and more stringent controls in special emission control areas (ECAs). The regulations stem from concerns about "local and global air pollution and environmental problems" to which the shipping industry contributes. In January 2020, a revised, more stringent Annex VI came into force in the emission control areas, with significantly lowered emission limits. As of 2011 there were four existing ECAs: the Baltic Sea, the North Sea, the North American ECA, including most of the US and Canadian coasts, and the US Caribbean ECA. Other areas may be added via the protocol defined in Annex VI. ECAs with nitrogen oxide thresholds are denoted as nitrogen oxide emission control areas (NECAs). Context In 1972, with the United Nations Conference on the Human Environment, widespread concern about air pollution led to international cooperation. Air pollution from "noxious gases from ships' exhausts" was already being discussed internationally. On 2 November 1973 the International Convention for the Prevention of Pollution from Ships was adopted, later modified by the 1978 Protocol (MARPOL 73/78). MARPOL is short for Marine Pollution. In 1979, the Convention on Long-Range Transboundary Air Pollution, the "first international legally binding instrument to deal with problems of air pollution", was signed. In 1997 the regulations regarding air pollution from ships described in Annex VI of the MARPOL Convention were adopted. These "regulations set limits on sulfur oxide (SOx) and nitrogen oxide (NOx) emissions from ship exhausts and prohibit deliberate emissions of ozone-depleting substances." The current convention is a combination of the 1973 Convention and the 1978 Protocol. It entered into force on 2 October 1983. According to the IMO, the United Nations agency responsible for the "safety and security of shipping and the prevention of marine pollution by ships", as of May 2013, 152 states, representing 99.2 per cent of the world's shipping tonnage, are parties to the convention. SECAs or ECAs As of 2011 existing ECAs include the Baltic Sea (SOx, adopted 1997; enforced 2005), the North Sea (SOx, adopted July 2005; enforced 2006), the North American ECA, including most of the US and Canadian coasts (NOx & SOx, adopted 2010; enforced 2012), and the US Caribbean ECA, including Puerto Rico and the US Virgin Islands (NOx & SOx, adopted 2011; enforced 2014). The Protocol of 1997 (MARPOL Annex VI) introduced the new Annex VI of MARPOL 73/78, which entered into force on 19 May 2005. SOx emissions control The purpose of the protocol was to reduce and control polluting emissions from marine vessels' exhausts. Under MARPOL, the IMO monitors the average worldwide sulfur content of marine fuels. As of 1 January 2020, the Annex sets a global cap of 0.5% m/m on the sulfur content of fuel; in regions classified as "SOx emission control areas" (SECAs), the limit is 0.1% m/m. Alternatively, MARPOL allows ships to comply by other means, such as exhaust gas cleaning systems, that limit SOx emissions to an equivalent degree.
In fact, exhaust gas cleaning systems must be approved by the State Administration before being put into use, and the regulations governing them are set by the IMO. The monitoring of the sulfur content of residual fuel supplied for use on board ships has been performed by the IMO since 1999, via bunker reports from around the world. According to the Marine Environment Protection Committee (MEPC), the worldwide average sulfur content in fuel oils for 2004 was 2.67% m/m. Nitrogen oxide (NOx) emissions – Regulation 13 NOx control requirements apply worldwide to any installed marine diesel engine over 130 kW of output power, other than engines used solely for emergency purposes, irrespective of the tonnage of the vessel on which the engine is installed. Different levels of regulation apply based on the ship's date of construction, broken down into three tiers (see the sketch below). Tier I applies to ships built after 1 January 2000: engines below 130 rpm must meet a total weighted cycle emission limit of 17 g/kWh, engines between 130 and 1999 rpm no more than 12.1 g/kWh, and engines above 2000 rpm a limit of 9.8 g/kWh. Tier II, applying to ships constructed after 1 January 2011, sets limits of 14.4 g/kWh for engines below 130 rpm, 9.7 g/kWh for engines between 130 and 1999 rpm, and 7.7 g/kWh for engines above 2000 rpm. Tier III controls apply only in the specific areas where NOx emissions are more strictly controlled (NECAs) and to ships constructed after 1 January 2016: for engines under 130 rpm the limit is 3.4 g/kWh, for engines between 130 and 1999 rpm the limit is 2.4 g/kWh, and engines above 2000 rpm must meet a total weighted cycle emission limit of 2.0 g/kWh. Incineration Annex VI prohibits burning certain products aboard ship. These products include: contaminated packaging materials and polychlorinated biphenyls; garbage, as defined by Annex V, containing more than traces of heavy metals; refined petroleum products containing halogen compounds; sewage sludge; and sludge oil. Greenhouse gas policy The Marine Environment Protection Committee (MEPC) has strongly encouraged members to use its scheme to report greenhouse gas emissions. Those gases include carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride. The purpose of the guidelines on CO2 emissions is to develop a system that would be used by ships during a trial period. Regulations of 2013 In 2013 new regulations described in a chapter added to MARPOL Annex VI came into effect in order to improve the "energy efficiency of international shipping". The regulations apply to all marine vessels of 400 gross tonnage and above. MARPOL requires the shipping industry to use the EEDI mechanism to ensure that all required energy-efficiency levels are met. All ships are also required to carry a Ship Energy Efficiency Management Plan (SEEMP) on board, so that seafarers always have a plan to refer to in order to maintain the energy-efficiency levels required by the area the ship is in or sailing to. As for the additions to Annex VI, there were amendments concerning emissions, sewage, and garbage. Prior to the regulations adjusted in 2013, the sulfur emission control areas included the Baltic Sea, the North Sea and the North American Area (coastal areas of the United States and Canada).
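Returning to the Regulation 13 limits above: in Annex VI the tier limits for the 130-1999 rpm band are defined as speed-dependent curves rather than single numbers. The sketch below assumes the standard formulas (45·n^-0.2, 44·n^-0.23 and 9·n^-0.2 g/kWh for Tiers I-III in that band), which are not spelled out in the text; under this assumption the single band figures quoted above correspond to a rated speed of roughly 720 rpm.

```python
# A hedged sketch of the MARPOL Annex VI Regulation 13 NOx limits.
# Assumption: the 130-1999 rpm band uses the curves 45*n^-0.2,
# 44*n^-0.23 and 9*n^-0.2 g/kWh for Tiers I-III; the endpoint values
# are taken from the text above.

def nox_limit_g_per_kwh(rated_speed_rpm: float, tier: int) -> float:
    """Total weighted cycle NOx emission limit (g/kWh) for a marine
    diesel engine, by rated speed n (rpm) and MARPOL tier."""
    n = rated_speed_rpm
    if tier == 1:
        return 17.0 if n < 130 else 9.8 if n >= 2000 else 45.0 * n ** -0.2
    if tier == 2:
        return 14.4 if n < 130 else 7.7 if n >= 2000 else 44.0 * n ** -0.23
    if tier == 3:
        return 3.4 if n < 130 else 2.0 if n >= 2000 else 9.0 * n ** -0.2
    raise ValueError("tier must be 1, 2 or 3")

for tier in (1, 2, 3):
    print(tier, round(nox_limit_g_per_kwh(720, tier), 1))
# -> 12.1, 9.7 and 2.4 g/kWh, matching the band figures quoted above
```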
The version of Annex VI updated in 2013, however, added the United States Caribbean Sea (specifically, areas around Puerto Rico and the United States Virgin Islands) to the list. Among the other regulatory updates, there is the possibility of establishing "special areas" where sewage discharge laws would be considerably stricter than elsewhere, as well as a few minor additions to the garbage disposal laws. Notes References Further reading "A new ECA and speed reduction limits in South Korean ports". DNV. 27 April 2020. Retrieved 11 June 2021.
Canada and the Kyoto Protocol
Canada was active in the negotiations that led to the Kyoto Protocol in 1997. The Liberal government that signed the accord in 1997 ratified it in parliament in 2002. Canada's Kyoto target was a 6% total reduction in greenhouse gas (GHG) emissions by 2012, compared to 1990 levels of 461 megatonnes (Mt) (Government of Canada (GC) 1994). Despite signing the accord, greenhouse gas emissions increased approximately 24.1% between 1990 and 2008. In 2011, Conservative Prime Minister Stephen Harper withdrew Canada from the Kyoto Protocol. Debates surrounding the implementation of the Kyoto Protocol in Canada are influenced by the nature of relationships between national, provincial, territorial and municipal jurisdictions. The federal government can negotiate multilateral agreements and enact legislation to respect their terms. However, the provinces have jurisdiction over energy and therefore, to a large extent, climate change. In 1980, when the National Energy Program was introduced, the country was almost torn apart, deeply dividing the provinces along an east–west axis. Since then, no federal government has implemented an intergovernmental, long-term, cohesive energy plan. Harper administration Some argue that when Prime Minister Stephen Harper took office in 2006, his strong opposition to the Kyoto Accord, his market-centred policies and "deliberate indifference" contributed to a dramatic rise in GHG emissions. Harper had previously denounced the Kyoto Protocol as a "socialist scheme to suck money out of wealth-producing nations" and pledged to fight against it in a 2002 fundraising letter addressed to Canadian Alliance members. Harper opposed the imposition of binding targets at the 2007 Bali Conference unless such targets were also imposed on countries such as China and India, which are exempt from GHG reduction requirements under the terms of the Kyoto Protocol. Although Canadian GHG emissions fell in 2008 and 2009 due to the global recession, Canada's emissions were expected to increase again with the economic recovery, fueled largely by the expansion of the oil sands. In 2009, Canada signed the Copenhagen Accord, which, unlike the Kyoto Accord, is a non-binding agreement. Canada agreed to reduce its GHG emissions by 17% from its 2005 levels by 2020, which translates to a reduction of 124 megatonnes (Mt). In December 2011, the Minister of the Environment, Peter Kent, announced Canada's withdrawal from the Kyoto Accord one day after negotiators from nearly 200 countries meeting in Durban, South Africa, at the 2011 United Nations Climate Change Conference completed a marathon of climate talks to establish a new treaty to limit carbon emissions. The Durban talks were leading to a new binding treaty, with targets for all countries, to take effect in 2020. Kent argued that "The Kyoto protocol does not cover the world's largest two emitters, the United States and China, and therefore cannot work." In 2010 Canada, Japan and Russia said they would not accept new Kyoto commitments. Canada is the only country to repudiate the Kyoto Accord. Kent argued that since Canada could not meet its targets, it needed to avoid the $14 billion in penalties for not achieving its goals. This decision drew a widespread international response. The cost of compliance has since been estimated to be 20 times lower. The two states whose emissions are not covered by the Kyoto Protocol (the US and China) have the largest emissions, being responsible for 41% of global emissions.
China's emissions increased by over 200% from 1990 to 2009. Canadian Council of Chief Executives VP John Dillon argued that a further extension of Kyoto would not be effective, as many countries, not just Canada, were not on track to meet their 1997 Kyoto commitments to reduce emissions. The Jobs, Growth and Long-term Prosperity Act (informally referred to as "Bill C-38"), a 2012 omnibus bill and budget implementation act passed in June 2012, repealed the Kyoto Protocol Implementation Act. According to the report entitled "Environment: GHG Emissions Per Capita" (July 2011), Canada ranks "15th out of 17 countries for greenhouse gas (GHG) emissions per capita and earns a 'D' grade. Canada's per capita GHG emissions increased by 3.2 percent between 1990 and 2008, while total GHG emissions in Canada grew 24 percent. The largest contributor to Canada's GHG emissions is the energy sector, which includes power generation (heat and electricity), transportation, and fugitive sources." Timeline
December 13, 2011: Canada became the first signatory to announce its withdrawal from the Kyoto Protocol.
2009: Canada signed the Copenhagen Accord. Unlike the Kyoto Accord, this is a non-binding agreement. Canada agreed to reduce its GHG emissions by 17% from its 2005 levels by 2020, to 607 megatonnes (Mt).
February 2009: The Clean Energy Dialogue (CED) was established between Canada and the United States "to enhance joint collaboration on the development of clean energy science and technologies to reduce greenhouse gases and combat climate change".
December 3–15, 2007: At the United Nations Climate Change Conference in Bali, Indonesia, Environment Minister John Baird argued that Canada would not attempt to reach its Kyoto targets because it was impossible to reach them. Baird was heavily criticized for impeding progress on the 'Bali Action Plan'.
2007: The Canadian federal government introduced the Clean Air Act.
January 2006: Stephen Harper's Conservative government took power. Harper abandoned Canada's Kyoto obligations in favour of his "Made in Canada" plan. In his first year, GHG emissions rose to an all-time high of 748 Mt.
2004: The federal government launched the One Tonne Challenge.
December 17, 2002: Canada officially ratified the Kyoto Accord under Prime Minister Jean Chrétien's Liberal government.
2001: The United States did not ratify the Kyoto Accord, leaving Canada as the only nation in the Americas with a binding emissions-reduction obligation.
2000: The federal government introduced the Action Plan 2000 on Climate Change.
1980: Prime Minister Pierre Trudeau introduced the controversial energy policy, the National Energy Program (NEP). Tim Flannery, the author of The Weathermakers, argued that since the NEP, with its tidal wave of negative western response that nearly tore the country apart, no federal government, Liberal or Conservative, has been brave enough to forge a new energy policy.
Emission profiles and trends Canada is "one of the highest per-capita emitters in the OECD and has higher energy intensity, adjusted for purchasing power parity, than any IEA country, largely the result of its size, climate (i.e. energy demands), and resource-based economy. Conversely, the Canadian power sector is one of OECD's lowest emitting generation portfolios, producing over three-quarters of its electricity from renewable energy sources and nuclear energy combined." Canada's GHG emissions increased from 1997 through 2001, dipped in 2002, increased again, then decreased in 2005.
By 2007 they had reached an all-time high of 748 Mt, followed by a decrease.
1990 (461 Mt)
1997 (671 Mt)
1998 (677 Mt)
2000 (716 Mt)
2001 (709 Mt)
2002 (715 Mt)
2003 (738 Mt)
2004 (742 Mt)
2005 (747 Mt); 33% higher than the Kyoto target
2006 (719 Mt)
2007 (748 Mt)
2008 (732 Mt)
2009 (690 Mt)
These are the emission profiles based on the United Nations Framework Convention on Climate Change review of Canada's annual report, which includes data from 1990 to 2008. Total GHG emissions amounted to 734,566.32 Gg CO2 eq. Total GHG emissions increased by 24.1% between 1990 and 2008. Overview Canada's overall greenhouse gas (GHG) emissions by gas and percentage are:
Carbon dioxide (CO2) (78.1%)
Methane (CH4) (13.4%)
Nitrous oxide (N2O) (7.1%)
Hydrofluorocarbons (HFCs), perfluorocarbons (PFCs) and sulphur hexafluoride (SF6) (1.4%)
Canada's overall greenhouse gas (GHG) emissions by economic sector and percentage are:
Energy sector (81.3%), comprising transportation, stationary combustion sources and fugitive sources
Agriculture sector (8.5%)
Industrial processes sector (7.2%)
Waste sector (2.9%)
Solvent and other product use sector (0.04%)
Land-use change and forestry sector
CO2 equivalent emissions are also reported by province and per capita for the year 2012; emissions data for Nunavut and the Northwest Territories are not given separately. Energy sector Fuel combustion activities Hydrocarbon consumption Canada is the third-largest per capita greenhouse gas polluter, after Australia and the United States. The main cause of these high GHG emissions is Canada's hydrocarbon consumption, at 8,300 kilograms of crude oil equivalent per person per year, the highest in the world. Fugitive emissions from fuels Fugitive emissions, such as leaks, venting and accidents, from oil and gas operations contribute 9% of energy sector emissions. Factors affecting emissions Economic factors Canada is the fifth-largest energy producer in the world, producing and exporting large quantities of crude oil, natural gas, electricity, and coal, which creates challenges in meeting emissions standards. The energy industry generates about a quarter of Canada's export revenues and employs some 650,000 people across the country. Geographic considerations Canada's geography, with its vast distances between many communities combined with the length and coldness of Canadian winters, contributes to Canada's high hydrocarbon consumption. As temperatures drop, fuel consumption rises and fuel efficiency drops. However, this has been largely taken into account by the structure of the Kyoto Protocol, which assigns targets depending on the given country's own emissions in 1990. Since in 1990 Canada was already vast and even colder than today, its emissions were already much higher and, consequently, Canada's 2012 Kyoto target much more forgiving than the targets of other countries with comparable population sizes. In fact, the 1990 emission benchmark not only implicitly accounts for objective factors, like climate and distances, but also rewards wasteful lifestyle choices: the preference to live in low-density suburbs and in large, energy-inefficient, individual homes inflated Canada's 1990 emissions and therefore further increased Canada's allowable emissions under the Kyoto Protocol. Of the 162 Mt of emissions resulting from transportation sources in 2008, over half (about 12 percent of Canada's total emissions) can be attributed to passenger cars and light trucks.
Emissions from these sources made up approximately 55 percent of Canada's total transportation emissions in 2008: light trucks (29.2%), heavy-duty trucks (27%), cars (25.4%), domestic aviation (5.3%), rail (4.4%), domestic marine (3.6%), other (5.2%) (Environment Canada, National GHG Inventory). Another 14 percent comes from non-energy sources. The rest comes from the production and manufacture of energy and power. According to Canada's Energy Outlook, a Natural Resources Canada (NRCan) report, Canada's GHG emissions will increase by 139 million tonnes between 2004 and 2020, with more than a third of the total coming from petroleum production and refining. Upstream emissions will decline slightly, primarily from gas field depletion and from increasing production of coalbed methane, which requires less processing than conventional natural gas. Meanwhile, emissions from unconventional resources and refining will soar. References Notes International regulatory bodies that influence Canada–Kyoto Further reading UNFCCC (April 21, 2011). Report of the individual review of the annual submission of Canada submitted in 2010 (PDF) (Report). United Nations Framework Convention on Climate Change. Retrieved December 19, 2011. Environment: GHG Emissions Per Capita (Report). July 2011. Retrieved December 19, 2011. See also Climate change in Canada Energy policy of Canada
Gas leak
A gas leak refers to a leak of natural gas or another gaseous product from a pipeline or other containment into any area where the gas should not be present. Gas leaks can be hazardous to health as well as the environment. Even a small leak into a building or other confined space may gradually build up an explosive or lethal concentration of gas. Natural gas leaks and the escape of refrigerant gas into the atmosphere are especially harmful, because of their global warming potential and ozone depletion potential. Leaks of gases associated with industrial operations and equipment are also generally known as fugitive emissions. Natural gas leaks from fossil fuel extraction and use are known as fugitive gas emissions. Such unintended leaks should not be confused with similar intentional types of gas release, such as gas venting, which consists of controlled releases often practised as a part of routine operations, or "emergency pressure releases", which are intended to prevent equipment damage and safeguard life. Gas leaks should also not be confused with "gas seepage" from the earth or oceans, whether natural or due to human activity. Fire and explosion safety Pure natural gas is colorless and odorless, and is composed primarily of methane. Unpleasant scents in the form of traces of mercaptans are usually added to assist in identifying leaks. This odor may be perceived as rotting eggs or a faintly unpleasant skunk smell. Persons detecting the odor must evacuate the area and abstain from using open flames or operating electrical equipment, to reduce the risk of fire and explosion. As a result of the Pipeline Safety Improvement Act of 2002 passed in the United States, federal safety standards require companies providing natural gas to conduct safety inspections for gas leaks in homes and other buildings receiving natural gas. The gas company is required to inspect gas meters and inside gas piping, from the point of entry into the building to the outlet side of the gas meter, for gas leaks. This may require entry into private homes by the natural gas companies to check for hazardous conditions. Harm to vegetation Gas leaks can damage or kill plants. In addition to leaks from natural gas pipes, methane and other gases migrating from landfill garbage disposal sites can also cause chlorosis and necrosis in grass, weeds, or trees. In some cases, leaking gas may migrate as far as 100 feet (30 m) from the source of the leak to an affected tree. Harm to animals Methane is an asphyxiant gas which can reduce the normal oxygen concentration in breathing air. Small animals and birds are also more sensitive to toxic gases like carbon monoxide that are sometimes present with natural gas. The expression "canary in a coal mine" derives from the historical practice of using a canary as an animal sentinel to detect dangerously high concentrations of naturally occurring coal gas. Greenhouse gas emissions Methane, the primary constituent of natural gas, is up to 120 times as potent a greenhouse gas as carbon dioxide. Thus, the release of unburned natural gas produces much stronger effects than the carbon dioxide that would have been released if the gas had been burned as intended. Leak grades In the United States, most state and federal agencies have adopted the Gas Piping and Technology Committee (GPTC) standards for grading natural gas leaks.
A Grade 1 leak is a leak that represents an existing or probable hazard to persons or property, and requires immediate repair or continuous action until the conditions are no longer hazardous. Examples of a Grade 1 leak are:
Any leak which, in the judgment of operating personnel at the scene, is regarded as an immediate hazard.
Escaping gas that has ignited.
Any indication of gas which has migrated into or under a building, or into a foreign sub-structure.
Any reading at the outside wall of a building, or where gas would likely migrate to an outside wall of a building.
Any reading of 80% LEL, or greater, in a confined space.
Any reading of 80% LEL, or greater, in small substructures (other than gas-associated substructures) from which gas would likely migrate to the outside wall of a building.
Any leak that can be seen, heard, or felt, and which is in a location that may endanger the general public or property.
A Grade 2 leak is a leak that is recognized as being non-hazardous at the time of detection, but justifies scheduled repair based on probable future hazard. Examples of a Grade 2 leak are:
Leaks requiring action ahead of ground freezing or other adverse changes in venting conditions: any leak which, under frozen or other adverse soil conditions, would likely migrate to the outside wall of a building.
Leaks requiring action within six months:
Any reading of 40% LEL, or greater, under a sidewalk in a wall-to-wall paved area that does not qualify as a Grade 1 leak.
Any reading of 100% LEL, or greater, under a street in a wall-to-wall paved area that has significant gas migration and does not qualify as a Grade 1 leak.
Any reading less than 80% LEL in small substructures (other than gas-associated substructures) from which gas would likely migrate, creating a probable future hazard.
Any reading between 20% LEL and 80% LEL in a confined space.
Any reading on a pipeline operating at 30 percent of specified minimum yield strength (SMYS) or greater, in a class 3 or 4 location, which does not qualify as a Grade 1 leak.
Any reading of 80% LEL, or greater, in gas-associated substructures.
Any leak which, in the judgment of operating personnel at the scene, is of sufficient magnitude to justify scheduled repair.
A Grade 3 leak is non-hazardous at the time of detection and can be reasonably expected to remain non-hazardous. Examples of a Grade 3 leak are:
Any reading of less than 80% LEL in small gas-associated substructures.
Any reading under a street in areas without wall-to-wall paving where it is unlikely the gas could migrate to the outside wall of a building.
Any reading of less than 20% LEL in a confined space.
A simplified sketch of the confined-space thresholds appears below. Studies In 2012, Boston University professor Nathan Phillips and his students drove along all 785 miles (1,263 km) of Boston roads with a gas sensor, identifying 3,300 leaks. The Conservation Law Foundation produced a map showing around 4,000 leaks reported to the Massachusetts Department of Public Utilities. In July 2014, the Environmental Defense Fund released an interactive online map based on gas sensors attached to three mapping cars which were already being driven along Boston streets to update Google Earth Street View. This survey differed from the previous studies in that an estimate of leak severity was produced, rather than just leak detection.
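Returning to the GPTC thresholds listed above, the following is a simplified sketch of one strand of the grading logic, the confined-space %LEL readings. Real grading weighs many further conditions (location, migration, paving), so this is illustrative only.

```python
# A simplified sketch of one strand of the GPTC grading rules quoted
# above: classifying a reading (as % of the lower explosive limit,
# LEL) taken in a confined space.

def grade_confined_space_reading(pct_lel: float) -> int:
    """Grade a confined-space gas reading per the thresholds above."""
    if pct_lel >= 80:
        return 1   # existing or probable hazard: repair immediately
    if pct_lel >= 20:
        return 2   # non-hazardous now, but schedule repair
    return 3       # non-hazardous and expected to remain so

assert grade_confined_space_reading(85) == 1
assert grade_confined_space_reading(50) == 2
assert grade_confined_space_reading(10) == 3
```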
Such maps should help the gas utility to prioritize leak repairs, as well as raise public awareness of the problem. In 2017, Rhode Island released an estimated 15.7 million metric tons of greenhouse gases, about a third of which comes from leaks in natural gas pipes. This figure, published in 2019, was calculated based on an assumed leakage rate of 2.7% (the rate of leakage in the nearby city of Boston). The study's authors estimated that fixing the leaks would incur an annual cost of $1.6 billion to $4 billion. Regulation Massachusetts Legislation passed in 2014 requires gas suppliers to make greater efforts to control some of the 20,000 documented leaks in the US state of Massachusetts. The new law requires grade 1 and 2 leaks to be repaired if the street above a gas pipe is dug up, and requires priority to be given to leaks near schools. It provides a mechanism for increased revenue from ratepayers (up to 1.5% without further approval) to cover the cost of repairs and replacement of leak-prone materials (like cast iron and non-cathodically protected steel) on an accelerated basis. The law sets a target of 20 years for replacement of pipes made from leak-prone materials if feasible given the revenue cap; as of 2015, Columbia Gas of Massachusetts (formerly named "Bay State Gas"), Berkshire Gas, Liberty Utilities, National Grid, and Unitil say they will meet this target, but NSTAR says it will take 25 years to complete. Leaks, statistics on leak-prone materials, and financial statements are reported annually to the Department of Public Utilities, which also has responsibility for rate-setting. Additional proposals not included in the law would have required grade 3 leaks to be repaired during road construction, and priority for leaks which are killing trees or which are near hospitals or churches. An attorney for the Conservation Law Foundation stated that the leaks were worth $38.8 million in lost natural gas, which also contributes 4% of the state's greenhouse gas emissions. A federal study prompted by US Senator Edward J. Markey concluded that Massachusetts consumers paid approximately $1.5 billion from 2000 to 2011 for gas which leaked and benefited no one. Markey has also backed legislation that would implement similar requirements at the national level, along with financing provisions for repairs. History Catastrophic gas leaks, such as the Bhopal disaster, are well recognized as problems, but the more subtle effects of chronic low-level leaks have been slower to gain recognition. Other contexts In work with dangerous gases (such as in a lab or industrial setting), a gas leak may require a hazmat emergency response, especially if the leaked material is flammable, explosive, corrosive, or toxic. See also Gas detector List of pipeline accidents in the United States Merrimack Valley gas explosions 2022 Nord Stream pipeline sabotage References External links naturalgaswatch.org (advocacy blog) City Maps of Gas Leaks reported by utilities in Massachusetts Somerville and Cambridge gas leaks surveyed by mobile detection vehicle
Nuclear power debate
The nuclear power debate is a long-running controversy about the risks and benefits of using nuclear reactors to generate electricity for civilian purposes. The debate about nuclear power peaked during the 1970s and 1980s, as more and more reactors were built and came online, and "reached an intensity unprecedented in the history of technology controversies" in some countries. In the 2010s, with growing public awareness about climate change and the critical role that carbon dioxide and methane emissions play in causing the heating of the earth's atmosphere, there was a resurgence in the intensity of the nuclear power debate. Proponents of nuclear energy argue that nuclear power is the only consistently reliable, clean and sustainable energy source which provides large amounts of uninterrupted energy without polluting the atmosphere or emitting the carbon emissions that cause global warming. They argue that the use of nuclear power provides well-paying jobs and energy security, and reduces dependence on imported fuels and exposure to price risks associated with resource speculation and foreign policy. Nuclear power produces virtually no air pollution, providing significant environmental benefits compared to the sizeable pollution and carbon emissions generated by burning fossil fuels like coal, oil and natural gas. Some proponents also believe that nuclear power is the only viable course for a country to achieve energy independence while also meeting its Nationally Determined Contributions (NDCs) to reduce carbon emissions in accordance with the Paris Agreement. They emphasize that the risks of storing waste are small and that existing stockpiles can be reduced by using this waste to produce fuels for the latest technology in newer reactors. The operational safety record of nuclear power is far better than that of the other major kinds of power plants and, by preventing pollution, it saves lives. Opponents say that nuclear power poses numerous threats to people and the environment and point to studies that question whether it will ever be a sustainable energy source. There are health risks, accidents, and environmental damage associated with uranium mining, processing and transport. Opponents highlight the high cost of and delays in the construction and maintenance of nuclear power plants, and the fears associated with nuclear weapons proliferation; they also fear sabotage of nuclear plants by terrorists, diversion and misuse of radioactive fuels or fuel waste, and leakage from the unsolved and imperfect long-term storage of radioactive nuclear waste. They also contend that reactors themselves are enormously complex machines where many things can and do go wrong, and that there have been many serious nuclear accidents, although, compared to other sources of power, nuclear power is (along with solar and wind energy) among the safest. Critics do not believe that these risks can be reduced through new technology. They further argue that when all the energy-intensive stages of the nuclear fuel chain are considered, from uranium mining to nuclear decommissioning, nuclear power is not a low-carbon electricity source. History At the 1963 ground-breaking for what would become the world's largest nuclear power plant, President John F. Kennedy declared that nuclear power was a "step on the long road to peace," and that by using "science and technology to achieve significant breakthroughs" we could "conserve the resources" to leave the world in better shape.
Yet he also acknowledged that the Atomic Age was a "dreadful age" and that "when we broke the atom apart, we changed the history of the world." A decade later in Germany, the construction of a nuclear power plant in Wyhl was prevented by local protestors and anti-nuclear groups. The successful use of civil disobedience to prevent the building of this plant was a key moment in the anti-nuclear power movement, as it sparked the creation of other groups not only in Germany but also around the globe. Anti-nuclear sentiment heightened further after the partial meltdown at Three Mile Island and the Chernobyl disaster, turning public opinion even more against nuclear power. Pro-nuclear groups, however, have increasingly pointed to the potential of nuclear energy to reduce carbon emissions and to its safety relative to generation sources such as coal, and have argued that the dangers associated with nuclear power are exaggerated in the media. Electricity and energy supplied Global nuclear power output saw slow but steady increases until 2006, when it peaked at 2,791 TWh; it then dropped, reaching its lowest level of generation in 2012, mostly as a result of Japanese reactors being offline for a full year. The output has since continued to grow from newly connected reactors, returning to pre-Fukushima levels in 2019, when the IEA described nuclear power as "historically one of the largest contributors of carbon-free electricity", with 452 reactors that in total produced 2,789 TWh of electricity. In the same year, the United States fleet of nuclear reactors produced 800 TWh of low-carbon electricity with an average capacity factor of 92%. Energy security For many countries, nuclear power affords energy independence; the fossil fuel crisis of the 1970s, for example, was the main driver behind France's Messmer plan. Nuclear power has been relatively unaffected by embargoes, and uranium is mined in countries willing to export, including Australia and Canada. Periods of low prices for fossil fuels and renewable energy have typically reduced political interest in nuclear power, while periods of expensive fossil fuels and underachieving renewable energy have increased it. Increased interest in climate change mitigation, low-carbon energy and the global energy crisis resulted in what has been described as another "nuclear renaissance" in the early 2020s. Sustainability Reliability The United States fleet of nuclear reactors produced 800 TWh of zero-emissions electricity in 2019 with an average capacity factor of 92%. In 2010, the worldwide average capacity factor was 80.1%. In 2005, the global average capacity factor was 86.8%, the number of SCRAMs per 7,000 hours critical was 0.6, and the unplanned capacity loss factor was 1.6%. Capacity factor is the net energy produced divided by the maximum possible when running at 100% output all the time; it therefore includes all scheduled maintenance and refueling outages as well as unplanned losses (see the sketch below). The 7,000 hours is roughly representative of how long any given reactor will remain critical in a year, meaning that the scram rate translates into a sudden and unplanned shutdown about 0.6 times per year for any given reactor in the world. The unplanned capacity loss factor represents the amount of power not produced due to unplanned scrams and postponed restarts. Since nuclear power plants are fundamentally heat engines, waste heat disposal becomes an issue at high ambient temperature.
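As a minimal sketch of the capacity-factor definition above (the reactor size and annual output used here are illustrative, not figures from the text):

```python
# Capacity factor: net energy produced divided by the maximum possible
# at 100% output for the whole period. Scheduled outages and unplanned
# losses both reduce it.

HOURS_PER_YEAR = 8760

def capacity_factor(energy_mwh: float, rated_mw: float,
                    hours: float = HOURS_PER_YEAR) -> float:
    return energy_mwh / (rated_mw * hours)

# e.g. a hypothetical 1,000 MW unit producing 8.06 TWh in a year:
print(f"{capacity_factor(8_060_000, 1_000):.1%}")  # ~92.0%
```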
Droughts and extended periods of high temperature can "cripple nuclear power generation, and it is often during these times when electricity demand is highest because of air-conditioning and refrigeration loads and diminished hydroelectric capacity". In such very hot weather a power reactor may have to operate at a reduced power level or even shut down. In 2009 in Germany, eight nuclear reactors had to be shut down simultaneously on hot summer days for reasons relating to the overheating of equipment or of rivers. Overheated discharge water has resulted in significant fish kills in the past, harming livelihoods and raising public concern. This issue applies equally to all thermal power plants, including fossil-gas, coal, CSP and nuclear. Economics New nuclear plants The economics of new nuclear power plants is a controversial subject, since there are diverging views on this topic, and multibillion-dollar investments ride on the choice of an energy source. Nuclear power plants typically have high capital costs for building the plant, but low direct fuel costs (with much of the costs of fuel extraction, processing, use and long-term storage externalized). Therefore, comparison with other power generation methods is strongly dependent on assumptions about construction timescales and capital financing for nuclear plants. Cost estimates also need to take into account plant decommissioning and nuclear waste storage costs. On the other hand, measures to mitigate global warming, such as a carbon tax or carbon emissions trading, may favor the economics of nuclear power. In recent years there has been a slowdown of electricity demand growth and financing has become more difficult, which impairs large projects such as nuclear reactors, with their very large upfront costs and long project cycles that carry a large variety of risks. In Eastern Europe, a number of long-established projects are struggling to find finance, notably Belene in Bulgaria and the additional reactors at Cernavoda in Romania, and some potential backers have pulled out. The reliable availability of cheap gas poses a major economic disincentive for nuclear projects. Analysis of the economics of nuclear power must take into account who bears the risks of future uncertainties. To date all operating nuclear power plants were developed by state-owned or regulated utility monopolies, where many of the risks associated with construction costs, operating performance, fuel price, and other factors were borne by consumers rather than suppliers. Many countries have now liberalized the electricity market, where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power plants. Following the 2011 Fukushima Daiichi nuclear disaster, costs are likely to go up for currently operating and new nuclear power plants, due to increased requirements for on-site spent fuel management and elevated design basis threats. New nuclear power plants require significant upfront investment, so far mostly driven by the highly customized designs of large plants, but this can be reduced by standardized, reusable designs (as South Korea has done).
While new nuclear power plants are more expensive than new renewable energy in upfront investment, the cost of the latter is expected to grow as the grid becomes saturated with intermittent sources and as energy storage and land usage become primary barriers to their expansion. A fleet of Small Modular Reactors can also be significantly cheaper than an equivalent single conventional-size reactor, due to standardized design and much smaller complexity. In 2020 the International Energy Agency called for the creation of a global nuclear power licensing framework, since under the existing legal situation each plant design needs to be licensed separately in each country. Cost of decommissioning nuclear plants The price of energy inputs and the environmental costs of every nuclear power plant continue long after the facility has finished generating its last useful electricity. Both nuclear reactors and uranium enrichment facilities must be decommissioned, returning the facility and its parts to a safe enough level to be entrusted for other uses. After a cooling-off period that may last as long as a century, reactors must be dismantled and cut into small pieces to be packed in containers for final disposal. The process is very expensive, time-consuming, potentially hazardous to the natural environment, and presents new opportunities for human error, accidents or sabotage. However, despite these risks, according to the World Nuclear Association, "In over 50 years of civil nuclear power experience, the management and disposal of civil nuclear waste has not caused any serious health or environmental problems, nor posed any real risk to the general public." The total energy required for decommissioning can be as much as 50% more than the energy needed for the original construction. In most cases, the decommissioning process costs between US$300 million and US$5.6 billion. Decommissioning at nuclear sites which have experienced a serious accident is the most expensive and time-consuming. In the U.S. there are 13 reactors that have permanently shut down and are in some phase of decommissioning, and none of them have completed the process. Current UK plants are expected to exceed £73 billion in decommissioning costs. Subsidies Critics of nuclear power claim that it is the beneficiary of inappropriately large economic subsidies, taking the form of research and development, financing support for building new reactors and decommissioning old reactors and waste, and that these subsidies are often overlooked when comparing the economics of nuclear against other forms of power generation. Nuclear power proponents argue that competing energy sources also receive subsidies. Fossil fuels receive large direct and indirect subsidies, such as tax benefits and not having to pay for the greenhouse gases they emit through, for example, a carbon tax. Renewable energy sources receive proportionately large direct production subsidies and tax breaks in many nations, although in absolute terms they are often less than subsidies received by non-renewable energy sources. In Europe, the FP7 research program has more subsidies for nuclear power than for renewables and energy efficiency together; over 70% of this is directed at the ITER fusion project. In the US, public research money for nuclear fission declined from $2,179 million to $35 million between 1980 and 2000. A 2010 report by the Global Subsidies Initiative compared the relative subsidies of the most common energy sources.
It found that nuclear energy receives 1.7 US cents per kilowatt hour (kWh) of energy it produces, compared to fossil fuels receiving 0.8 US cents per kWh, renewable energy receiving 5.0 US cents per kWh and biofuels receiving 5.1 US cents per kWh. Carbon taxation is a significant positive driver in the economics of both nuclear plants and renewable energy sources, all of which have low life-cycle greenhouse-gas emissions. In 2019 a heated debate took place in the European Union over the creation of a "green finance taxonomy" list intended to create investment opportunities for zero-emission energy technologies. Initially the basic criterion for inclusion was life-cycle emissions of 100 gCO2eq/kWh or less, which would include nuclear power, which falls well under this threshold (12 gCO2eq/kWh). Under lobbying from the European Greens and Germany, an additional "do no harm" criterion was introduced, intended specifically to exclude nuclear power from the list. In July 2020 W. Gyude Moore, Liberia's former Minister for Public Works, called on international bodies to start (or restart) funding for nuclear projects in Africa, following the example of the US Development Finance Corporation. Moore accused high-income countries like Germany and Australia of "hypocrisy" and "pulling up the ladder behind them", as they have built their strong economies over decades of cheap fossil or nuclear power, and now are effectively preventing African countries from using the only low-carbon and non-intermittent alternative, nuclear power. Also in July 2020 Hungary declared that its nuclear power will be used as a low-emission source of energy to produce hydrogen, while Czechia began the approval process for a public loan to the CEZ nuclear power station. Indirect nuclear insurance subsidy Kristin Shrader-Frechette has said "if reactors were safe, nuclear industries would not demand government-guaranteed, accident-liability protection, as a condition for their generating electricity". No private insurance company or even consortium of insurance companies "would shoulder the fearsome liabilities arising from severe nuclear accidents". The potential costs resulting from a nuclear accident (including one caused by a terrorist attack or a natural disaster) are great. The liability of owners of nuclear power plants in the U.S. is currently limited under the Price-Anderson Act (PAA). The Price-Anderson Act, introduced in 1957, was "an implicit admission that nuclear power provided risks that producers were unwilling to assume without federal backing". The Price-Anderson Act "shields nuclear utilities, vendors and suppliers against liability claims in the event of a catastrophic accident by imposing an upper limit on private sector liability". Without such protection, private companies were unwilling to be involved. No other technology in the history of American industry has enjoyed such continuing blanket protection. The PAA was due to expire in 2002, and the then U.S. vice-president Dick Cheney said in 2001 that "nobody's going to invest in nuclear power plants" if the PAA is not renewed. In 1983, the U.S. Nuclear Regulatory Commission (USNRC) concluded that the liability limits placed on nuclear insurance were significant enough to constitute a subsidy, but did not attempt to quantify the value of such a subsidy at that time. Shortly after this, in 1990, Dubin and Rothwell were the first to estimate the value to the U.S.
nuclear industry of the limitation on liability for nuclear power plants under the Price-Anderson Act. Their underlying method was to extrapolate from the premiums operators currently pay to the full liability they would have to pay for full insurance in the absence of the PAA limits. The size of the estimated subsidy per reactor per year was $60 million prior to the 1982 amendments, and up to $22 million following the 1988 amendments. In a separate article in 2003, Anthony Heyes updated the 1988 estimate of $22 million per year to $33 million (2001 dollars). In case of a nuclear accident, should claims exceed this primary liability, the PAA requires all licensees to additionally provide a maximum of $95.8 million into the accident pool, totaling roughly $10 billion if all reactors were required to pay the maximum. This is still not sufficient in the case of a serious accident, as the cost of damages could exceed $10 billion. According to the PAA, should the costs of accident damages exceed the $10 billion pool, the process for covering the remainder of the costs would be defined by Congress. In 1982, a Sandia National Laboratories study concluded that depending on the reactor size and 'unfavorable conditions' a serious nuclear accident could lead to property damages as high as $314 billion, while fatalities could reach 50,000. Environmental effects Nuclear generation does not directly produce sulfur dioxide, nitrogen oxides, mercury or other pollutants associated with the combustion of fossil fuels. Nuclear power also has a very high surface power density, which means much less space is used to produce the same amount of energy (thousands of times less when compared to wind or solar power). The primary environmental effects of nuclear power come from uranium mining, radioactive effluent emissions, and waste heat. The nuclear industry, including all past nuclear weapon testing and nuclear accidents, contributes less than 1% of the overall background radiation globally. A 2014 multi-criterion analysis of impact factors critical for biodiversity, economic and environmental sustainability indicated that nuclear and wind power have the best benefit-to-cost ratios and called on environmental movements to reconsider their position on nuclear power and evidence-based policy making. In 2013 an open letter with the same message was signed by climate scientists Ken Caldeira, Kerry Emanuel, James Hansen and Tom Wigley, and then co-signed by many others. Resource usage in uranium mining is 840 m3 of water (up to 90% of the water is recycled) and 30 tonnes of CO2 per tonne of uranium mined. Energy return on investment (EROEI) for a PWR nuclear power plant ranges from 75 to 100, meaning the total energy invested in the power plant is returned within two months. Median life-cycle greenhouse-gas emissions of a nuclear power plant are 12 gCO2eq/kWh. Both indicators are among the most competitive of all available energy sources. The Intergovernmental Panel on Climate Change (IPCC) recognizes nuclear as one of the lowest lifecycle emissions energy sources available, lower than solar, and only bested by wind. The US National Renewable Energy Lab (NREL) also cites nuclear as a very low lifecycle emissions source.
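As an illustration of how such life-cycle figures combine, the sketch below computes a generation-weighted grid average from median intensities. The nuclear and wind values are the medians cited above; the solar, gas and coal values are IPCC AR5 medians; and the mix shares are hypothetical.

```python
# Generation-weighted average life-cycle CO2 intensity of a grid.
LIFECYCLE_G_PER_KWH = {"nuclear": 12, "wind": 11, "solar": 48,
                       "gas": 490, "coal": 820}  # IPCC AR5 medians

def grid_intensity(mix: dict[str, float]) -> float:
    """Mix values are shares of electricity generated; must sum to 1."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(share * LIFECYCLE_G_PER_KWH[src] for src, share in mix.items())

# A hypothetical mix: 40% nuclear, 20% wind, 10% solar, 20% gas, 10% coal.
print(f"{grid_intensity({'nuclear': 0.4, 'wind': 0.2, 'solar': 0.1,
                         'gas': 0.2, 'coal': 0.1}):.1f} gCO2eq/kWh")
# -> 191.8 gCO2eq/kWh; the fossil share dominates the total
```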
In terms of life-cycle surface power density (land surface area used per power output), nuclear power has a median density of 240 W/m2, which is 34x more than solar power (6.63 W/m2) and 130x more than wind power (1.84 W/m2), meaning that when the same power output is to be provided by nuclear or renewable sources, the latter use tens to hundreds of times more land surface for the same amount of power produced. Greenpeace and some other environmental organizations have been criticized for distributing claims about CO2 emissions from nuclear power that are unsupported by the scientific data. Their influence has been attributed to the "shocking" results of a 2020 poll in France, in which 69% of the respondents believed that nuclear power contributes to climate change. Greenpeace Australia, for example, claimed that "there's no significant savings on carbon output" in nuclear power, which directly contradicts the IPCC life-cycle analysis. In 2018 Greenpeace Spain ignored conclusions from a University of Comillas report it had procured, showing the lowest CO2 emissions in scenarios involving nuclear power, and instead supported an alternative scenario involving fossil fuels, with much higher emissions. Life-cycle land usage by nuclear power (including mining and waste storage, direct and indirect) is 100 m2/GWh, which is half that of solar power and one-tenth that of wind power. Land surface usage is the main reason for opposition against on-shore wind farms. In June 2020 Zion Lights, spokesperson of Extinction Rebellion UK, declared her support for nuclear energy as a critical part of the energy mix, along with renewable energy sources, and called on fellow environmentalists to accept that nuclear power is part of the "scientifically assessed solutions for addressing climate change". In July 2020 Good Energy Collective, the first women-only pressure group advocating nuclear power as part of climate change mitigation, was formed in the US. In March 2021, 46 environmental organizations from the European Union wrote an open letter to the President of the European Commission calling for an increase in the share of nuclear power as the most effective way of reducing the EU's reliance on fossil fuels. The letter also condemned "multi-facetted misrepresentation" and "rigged information about nuclear, with opinion driven by fear", which result in the shutting down of stable, low-carbon nuclear power plants. A 2023 study calculated the land surface usage of nuclear power at 0.15 km2/TWh, the lowest of all energy sources. In May 2023, the Washington Post wrote, "Had Germany kept its nuclear plants running from 2010, it could have slashed its use of coal for electricity to 13 percent by now. Today's figure is 31 percent... Already more lives might have been lost just in Germany because of air pollution from coal power than from all of the world's nuclear accidents to date, Fukushima and Chernobyl included." EU Taxonomy A comprehensive debate on the role of nuclear power has continued since 2020 as part of regulatory work on the European Union Taxonomy of environmentally sustainable technologies. The low carbon intensity of nuclear power was not disputed, but opponents raised nuclear waste and thermal pollution as unsustainable elements that should exclude it from the sustainable taxonomy.
Detailed technical analysis was delegated to the European Commission Joint Research Centre (JRC), which looked at all potential issues of nuclear power from scientific, engineering and regulatory points of view and in March 2021 published a 387-page report which concluded: "The analyses did not reveal any science-based evidence that nuclear energy does more harm to human health or to the environment than other electricity production technologies already included in the Taxonomy as activities supporting climate change mitigation." The EU tasked two further expert commissions with validating the JRC findings: the Euratom Article 31 expert group on radiation protection and SCHEER (Scientific Committee on Health, Environmental and Emerging Risks). Both groups published their reports in July 2021, largely confirming the JRC conclusions, with a number of topics requiring further investigation: "The SCHEER is of the opinion that the findings and recommendations of the report with respect of the non-radiological impacts are in the main comprehensive. (...) The SCHEER broadly agrees with these statements, however, the SCHEER is of the view that dependence on an operational regulatory framework is not in itself sufficient to mitigate these impacts, e.g. in mining and milling where the burden of the impacts are felt outside Europe." SCHEER also pointed out that the JRC conclusion that nuclear power "does less harm" than the other (e.g. renewable) technologies against which it was compared is not entirely equivalent to the "do no significant harm" criterion postulated by the taxonomy, and that the JRC analysis of thermal pollution does not fully take into account limited water mixing in shallow waters. The Article 31 group confirmed the JRC findings: "The conclusions of the JRC report are based on well-established results of scientific research, reviewed in detail by internationally recognised organisations and committees." Also in July 2021 a group of 87 members of the European Parliament signed an open letter calling on the European Commission to include nuclear power in the sustainable taxonomy following the favourable scientific reports, and warned against an anti-nuclear coalition that "ignore scientific conclusions and actively oppose nuclear power". In February 2022 the European Commission published the Complementary Climate Delegated Act to the taxonomy, which set specific criteria under which nuclear power may be included in sustainable energy funding schemes. The inclusion of nuclear power and fossil gas in the taxonomy was justified by the scientific reports mentioned above and based primarily on the very large potential of nuclear power to decarbonize electricity production. For nuclear power, the Taxonomy covers research and development of new Generation IV reactors, new nuclear power plants built with Generation III reactors, and lifetime extension of existing nuclear power plants. All projects must satisfy requirements as to safety, thermal pollution and waste management. Effect on greenhouse gas emissions An average nuclear power plant prevents the emission of 2,000,000 metric tons of CO2, 5,200 metric tons of SO2 and 2,200 metric tons of NOx per year, compared to an average fossil fuel plant. While nuclear power does not directly emit greenhouse gases, emissions occur, as with every source of energy, over a facility's life cycle: mining and fabrication of construction materials, plant construction, operation, uranium mining and milling, and plant decommissioning.
The Intergovernmental Panel on Climate Change found a median value of 12 g (0.42 oz) of lifecycle carbon dioxide-equivalent emissions per kilowatt-hour (kWh) for nuclear power, one of the lowest among all energy sources and comparable only with wind power. Data from the International Atomic Energy Agency showed a similar result, with nuclear energy having the lowest emissions of any energy source when accounting for both direct and indirect emissions from the entire energy chain. Climate and energy scientists James Hansen, Ken Caldeira, Kerry Emanuel and Tom Wigley have released an open letter stating, in part, that: Renewables like wind and solar and biomass will certainly play roles in a future energy economy, but those energy sources cannot scale up fast enough to deliver cheap and reliable power at the scale the global economy requires. While it may be theoretically possible to stabilize the climate without nuclear power, in the real world there is no credible path to climate stabilization that does not include a substantial role for nuclear power. The statement was widely discussed in the scientific community, with voices both against and in favor. It has also been recognized that the life-cycle CO2 emissions of nuclear power will eventually increase once high-grade uranium ore is used up and lower-grade uranium needs to be mined and milled using fossil fuels, although there is controversy over when this might occur. As the nuclear power debate continues, greenhouse gas emissions are increasing. Predictions estimate that even with draconian emission reductions within ten years, the world will still pass 650 ppm of carbon dioxide and a catastrophic 4 °C (7.2 °F) average rise in temperature. Public perception is that renewable energies such as wind, solar, biomass and geothermal are significantly counteracting global warming, yet all of these sources combined supplied only 1.3% of global energy in 2013, while 8 billion tonnes (1.8×10¹³ lb) of coal was burned annually. This "too little, too late" effort may be a mass form of climate change denial, or an idealistic pursuit of green energy. In 2015, an open letter from 65 leading biologists worldwide described nuclear power as one of the energy sources most friendly to biodiversity, owing to its high energy density and low environmental footprint: Much as leading climate scientists have recently advocated the development of safe, next-generation nuclear energy systems to combat climate change, we entreat the conservation and environmental community to weigh up the pros and cons of different energy sources using objective evidence and pragmatic trade-offs, rather than simply relying on idealistic perceptions of what is 'green'. In response to the 2016 Paris Agreement, a number of countries explicitly listed nuclear power as part of their commitment to reduce greenhouse gas emissions. In June 2019, an open letter to "the leadership and people of Germany", written by almost 100 Polish environmentalists and scientists, urged Germany to "reconsider the decision on the final decommissioning of fully functional nuclear power plants" for the benefit of the fight against global warming. In 2020, a group of European scientists published an open letter to the European Commission calling for the inclusion of nuclear power as an "element of stability in carbon-free Europe".
Also in 2020, a coalition of 30 European nuclear industry companies and research bodies published an open letter highlighting that nuclear power remains the largest single source of zero-emissions energy in the European Union. In 2021, the prime ministers of Hungary, France, the Czech Republic, Romania, the Slovak Republic, Poland and Slovenia signed an open letter to the European Commission calling for recognition of the important role of nuclear power as the only non-intermittent low-carbon energy source currently available at industrial scale in Europe. In 2021, UNECE described suggested pathways for building a sustainable energy supply with an increased role for low-carbon nuclear power. In April 2021, US President Joe Biden's infrastructure plan called for 100% of US electricity to be generated from low-carbon sources, of which nuclear power would be a significant component. The IEA's "Net Zero by 2050" pathways, published in 2021, assume 104% growth of nuclear power capacity accompanied by 714% growth of renewable energy sources, mostly solar power. In June 2021, over 100 organisations published a position paper for the COP26 climate conference highlighting that nuclear power is a low-carbon dispatchable energy source that has been the most successful at reducing CO2 emissions from the energy sector. In August 2021, the United Nations Economic Commission for Europe (UNECE) described nuclear power as an important tool for mitigating climate change, one that has prevented 74 Gt of CO2 emissions over the last half-century and provides 20% of energy in Europe and 43% of its low-carbon energy. Faced with rising fossil gas prices and the return of coal and gas power plants, a number of European leaders questioned the anti-nuclear policies of Belgium and Germany. European Commissioner for the Internal Market Thierry Breton described the shutdown of operational nuclear power plants as depriving Europe of low-carbon energy capacity. Organizations such as Climate Bonds Initiative, Stand Up for Nuclear, Nuklearia and Mothers for Nuclear Germany-Austria-Switzerland organize periodic events in defense of the plants due to be closed. High-level radioactive waste The world's nuclear fleet creates about 10,000 metric tons (22,000,000 pounds) of high-level spent nuclear fuel each year. High-level radioactive waste management concerns the management and disposal of highly radioactive materials created during the production of nuclear power. This requires "geological disposal", or burial, because of the extremely long periods of time for which radioactive waste remains deadly to living organisms. Of particular concern are two long-lived fission products, technetium-99 (half-life 220,000 years) and iodine-129 (half-life 15.7 million years), which dominate spent nuclear fuel radioactivity after a few thousand years. The most troublesome transuranic elements in spent fuel are neptunium-237 (half-life two million years) and plutonium-239 (half-life 24,000 years). However, many nuclear power by-products are usable as nuclear fuel themselves; extracting the usable energy-producing contents from nuclear waste is called "nuclear recycling". About 80% of the by-products can be reprocessed and recycled back into nuclear fuel, greatly reducing the long-lived waste burden. The remaining high-level radioactive waste requires sophisticated treatment and management to successfully isolate it from the biosphere.
This usually necessitates treatment, followed by a long-term management strategy involving permanent storage, disposal, or transformation of the waste into a non-toxic form. About 95% of nuclear waste by volume is classified as very low-level waste (VLLW) or low-level waste (LLW), with 4% being intermediate-level waste (ILW) and less than 1% being high-level waste (HLW). From 1954 (the start of nuclear energy production) until the end of 2016, about 390,000 tons of spent fuel were generated worldwide. About one-third of this had been reprocessed, with the remainder in storage. Governments around the world are considering a range of waste management and disposal options, usually involving deep-geologic placement, although there has been limited progress toward implementing long-term waste management solutions. This is partly because the timeframes in question when dealing with radioactive waste range from 10,000 to millions of years, according to studies based on the effect of estimated radiation doses. Since the fraction of a radioisotope's atoms decaying per unit of time is inversely proportional to its half-life, the relative radioactivity of a quantity of buried human radioactive waste would diminish over time compared with natural radioisotopes (such as the decay chains of the 120 trillion tons of thorium and 40 trillion tons of uranium present at relatively trace concentrations of parts per million each over the crust's 3×10¹⁹-ton mass). For instance, over a timeframe of thousands of years, after the most active short-half-life radioisotopes have decayed, burying U.S. nuclear waste would increase the radioactivity in the top 2,000 feet (610 m) of rock and soil in the United States (10 million km² or 3.9 million sq mi) by approximately 0.1 parts per million over the cumulative amount of natural radioisotopes in such a volume, although the vicinity of the site would have a far higher concentration of artificial radioisotopes underground than such an average.
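The inverse relation between half-life and activity asserted above is the standard radioactive decay law, A = λN with λ = ln 2 / t½: for a fixed number of atoms, a ten-times-longer half-life means ten-times-lower activity. A short illustrative sketch, using the half-lives quoted above:

import math

SECONDS_PER_YEAR = 3.156e7

def activity_bq(n_atoms: float, half_life_years: float) -> float:
    """Decays per second (A = lambda * N, with lambda = ln 2 / half-life)."""
    lam = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    return lam * n_atoms

n = 1e24  # the same (arbitrary) number of atoms of each isotope
tc99 = activity_bq(n, 220_000)      # technetium-99, half-life as quoted above
i129 = activity_bq(n, 15_700_000)   # iodine-129
print(f"Tc-99 is ~{tc99 / i129:.0f}x more radioactive than I-129, atom for atom")

# Fraction remaining after t years: N(t)/N0 = 0.5 ** (t / half_life)
print(f"Pu-239 remaining after 100,000 years: {0.5 ** (100_000 / 24_000):.1%}")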
Nuclear waste disposal is one of the most controversial facets of the nuclear power debate. Presently, waste is mainly stored at individual reactor sites, and there are over 430 locations around the world where radioactive material continues to accumulate. Experts agree that centralized underground repositories which are well-managed, guarded, and monitored would be a vast improvement. There is an international consensus on the advisability of storing nuclear waste in deep underground repositories, but as of 2009 no country in the world had yet opened such a site. There are dedicated waste storage sites at the Waste Isolation Pilot Plant in New Mexico and at two German salt mines, the Morsleben Repository and Schacht Asse II. Public debate on the subject frequently focuses on nuclear waste only, ignoring the fact that deep geologic repositories already exist globally (including in Canada and Germany) and store highly toxic waste such as arsenic, mercury and cyanide, which, unlike nuclear waste, does not lose toxicity over time. Numerous media reports about alleged "radioactive leaks" from nuclear storage sites in Germany have also confused waste from nuclear plants with low-level medical waste (such as irradiated X-ray plates and devices). The European Commission Joint Research Centre report of 2021 (see above) concluded: Management of radioactive waste and its safe and secure disposal is a necessary step in the lifecycle of all applications of nuclear science and technology (nuclear energy, research, industry, education, medical, and other). Radioactive waste is therefore generated in practically every country, the largest contribution coming from the nuclear energy lifecycle in countries operating nuclear power plants. Presently, there is broad scientific and technical consensus that disposal of high-level, long-lived radioactive waste in deep geologic formations is, at the state of today's knowledge, considered as an appropriate and safe means of isolating it from the biosphere for very long time scales. Prevented mortality In March 2013, climate scientists Pushker Kharecha and James Hansen published a paper in Environmental Science & Technology, entitled Prevented mortality and greenhouse gas emissions from historical and projected nuclear power. It estimated an average of 1.8 million lives saved worldwide by the use of nuclear power instead of fossil fuels between 1971 and 2009. The paper examined mortality levels per unit of electrical energy produced from fossil fuels (coal and natural gas) as well as nuclear power. Kharecha and Hansen assert that their results are probably conservative, as they analyze only deaths and do not include a range of serious but non-fatal respiratory illnesses, cancers, hereditary effects and heart problems, nor do they include the fact that fossil fuel combustion in developing countries tends to have a higher carbon and air pollution footprint than in developed countries. The authors also conclude that the emission of some 64 billion tonnes (7.1×10¹⁰ tons) of carbon dioxide equivalent has been avoided by nuclear power between 1971 and 2009, and that between 2010 and 2050 nuclear power could additionally avoid up to 80–240 billion tonnes (8.8×10¹⁰–2.65×10¹¹ tons). A 2020 study on the Energiewende found that if Germany had postponed the nuclear phase-out and phased out coal first, it could have saved 1,100 lives and $12 billion in social costs per year. In 2020, the Vatican praised "peaceful nuclear technologies" as a significant factor in the "alleviation of poverty and the ability of countries to meet their development goals in a sustainable way". Accidents and safety In comparison to other sources of power, nuclear power is (along with solar and wind energy) among the safest, accounting for all the risks from mining to production to storage, including the risks of spectacular nuclear accidents. Sources of health effects from nuclear power include occupational exposure (mostly during mining), routine exposure from power generation, decommissioning, reprocessing, waste disposal, and accidents. The number of deaths caused by these effects is extremely small. Accidents in the nuclear industry have been less damaging than accidents in the hydroelectric power industry, and less damaging than the constant harm from fossil fuel air pollutants. For instance, running a 1000-MWe nuclear power plant, including uranium mining, reactor operation and waste disposal, produces a collective radiation dose of 136 person-rem/year, while the dose is 490 person-rem/year for an equivalent coal-fired power plant.
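Estimates such as Kharecha and Hansen's reduce to multiplying cumulative electricity generation by the difference in deaths per unit of energy between fossil and nuclear power. The sketch below shows only the shape of the method; the generation total and mortality rates are assumed round numbers, not the paper's actual inputs:

# Sketch: deaths avoided = generation * (fossil rate - nuclear rate).
generation_twh = 70_000        # assumed cumulative nuclear output, 1971-2009
fossil_deaths_per_twh = 25.0   # assumed, dominated by air pollution
nuclear_deaths_per_twh = 0.07  # assumed, including accident estimates

avoided = generation_twh * (fossil_deaths_per_twh - nuclear_deaths_per_twh)
print(f"~{avoided / 1e6:.1f} million deaths avoided")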
The World Nuclear Association provides a comparison of deaths from accidents in the course of different forms of energy production. In their comparison, deaths per TW-yr of electricity produced from 1970 to 1992 are quoted as 885 for hydropower, 342 for coal, 85 for natural gas, and 8 for nuclear. Nuclear power plant accidents rank first in terms of their economic cost, accounting for 41 percent of all property damage attributed to energy accidents as of 2008. An EU JRC study in 2021 compared actual and potential fatality rates for different energy generation technologies based on the Energy-Related Severe Accident Database (ENSAD). Because actual nuclear accidents have been very few compared with technologies such as coal or fossil gas, additional modelling was applied using Probabilistic Safety Assessment (PSA) methodology to estimate and quantify the risk of hypothetical severe nuclear accidents in the future. The analysis looked at Generation II (PWR) and Generation III (EPR) reactors, and estimated two metrics: the fatality rate per GWh (reflecting casualties related to normal operations), and the maximum credible number of casualties in a single hypothetical accident (reflecting general risk aversion). With respect to the fatality rate per GWh in Generation II reactors, it made the following conclusion: With regard to the first metric, fatality rates, the results indicate that current Generation II nuclear power plants have a very low fatality rate compared to all forms of fossil fuel energies and comparable with hydropower in OECD countries and wind power. Only Solar energy has significantly lower fatality rates. (...) Operating nuclear power plants are subject to continuous improvement. As a result of lessons learned from operating experience, the development of scientific knowledge, or as safety standards are updated, reasonably practicable safety improvements are implemented at existing nuclear power plants. With respect to the fatality rate per GWh in Generation III (EPR) reactors: Generation III nuclear power plants are designed fully in accordance with the latest international safety standards that have been continually updated to take account of advancement in knowledge and of the lessons learned from operating experience, including major events like the accidents at Three Mile Island, Chernobyl and Fukushima. The latest standards include extended requirements related to severe accident prevention and mitigation. The range of postulated initiating events taken into account in the design of the plant has been expanded to include, in a systematic way, multiple equipment failures and other very unlikely events, resulting in a very high level of prevention of accidents leading to melting of the fuel. Despite the high level of prevention of core melt accidents, the design must be such as to ensure the capability to mitigate the consequences of severe degradation of the reactor core. For this, it is necessary to postulate a representative set of core melt accident sequences that will be used to design mitigating features to be implemented in the plant design to ensure the protection of the containment function and avoid large or early radioactive releases into the environment. According to WENRA [3.5-3], the objective is to ensure that even in the worst case, the impact of any radioactive releases to the environment would be limited to within a few km of the site boundary.
These latest requirements are reflected in the very low fatality rate for the Generation III European Pressurised-water Reactor (EPR) given in figure 3.5-1. The fatality rates associated with future nuclear energy are the lowest of all the technologies. The second estimate, the maximum number of casualties in the worst-case scenario, is much higher, but the likelihood of such an accident is estimated at 10⁻¹⁰ per reactor-year, or once in ten billion reactor-years: The maximum credible number of fatalities from a hypothetical nuclear accident at a Generation III NPP calculated by Hirschberg et al [3.5-1] is comparable with the corresponding number for hydroelectricity generation, which is in the region of 10,000 fatalities due to hypothetical dam failure. In this case, the fatalities are all or mostly immediate fatalities and are calculated to have a higher frequency of occurrence. The JRC report notes that "such a number of fatalities, even if based on very pessimistic assumptions, has an impact on public perception due to disaster (or risk) aversion", explaining that the general public attributes higher apparent importance to low-frequency events with a high number of casualties, while even much higher numbers of casualties spread evenly over time are not perceived as equally important. In comparison, over 400,000 premature deaths per year in the EU are attributed to air pollution, while in the US tobacco causes some 480,000 premature deaths of smokers and 40,000 of non-smokers per year.
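The risk-aversion point can be made concrete with an expected-value calculation: an accident with very large consequences but a vanishingly small frequency contributes almost nothing to the statistically expected death toll. A sketch using the JRC figures quoted above (the fleet size is an assumed round number):

# Expected fatalities = frequency per reactor-year * consequence * reactor count.
frequency_per_reactor_year = 1e-10  # worst-case Gen III accident (JRC estimate)
max_fatalities = 10_000             # maximum credible consequence (quoted above)
reactors = 400                      # assumed world fleet size, for illustration

expected_per_year = frequency_per_reactor_year * max_fatalities * reactors
print(f"statistically expected fatalities per year: {expected_per_year:.4f}")
# ~0.0004 per year across the whole fleet, versus the >400,000 premature
# deaths per year attributed to air pollution in the EU quoted above.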
Benjamin K. Sovacool has reported that worldwide there have been 99 accidents at nuclear power plants. Fifty-seven accidents have occurred since the Chernobyl disaster, and 57% (56 out of 99) of all nuclear-related accidents have occurred in the US. Serious nuclear power plant accidents include the Fukushima Daiichi nuclear disaster (2011), the Chernobyl disaster (1986), the Three Mile Island accident (1979), and the SL-1 accident (1961). Nuclear-powered submarine mishaps include the USS Thresher accident (1963), the K-19 reactor accident (1961), the K-27 reactor accident (1968), and the K-431 reactor accident (1985). The effect of nuclear accidents has been a topic of debate practically since the first nuclear reactors were constructed, and has been a key factor in public concern about nuclear facilities. Technical measures to reduce the risk of accidents or to minimize the amount of radioactivity released to the environment have been adopted. As a result, deaths caused by these accidents are minimal, to the point that the Fukushima evacuation efforts caused an estimated 32 times the number of deaths caused by the accident itself, with 1,000 to 1,600 deaths from the evacuation and 40 to 50 deaths from the accident itself. Despite the use of such safety measures, "there have been many accidents with varying effects as well near misses and incidents". Nuclear power plants are complex energy systems, and opponents of nuclear power have criticized the sophistication and complexity of the technology. Helen Caldicott has said: "... in essence, a nuclear reactor is just a very sophisticated and dangerous way to boil water—analogous to cutting a pound of butter with a chain saw." The 1979 Three Mile Island accident inspired Charles Perrow's book Normal Accidents, in which a nuclear accident occurs as a result of an unanticipated interaction of multiple failures in a complex system. TMI was an example of a normal accident because it was deemed "unexpected, incomprehensible, uncontrollable and unavoidable". Perrow concluded that the failure at Three Mile Island was a consequence of the system's immense complexity. Such modern high-risk systems, he realized, were prone to failure however well they were managed. It was inevitable that they would eventually suffer what he termed a "normal accident". Therefore, he suggested, we might do better to contemplate a radical redesign, or if that was not possible, to abandon such technology entirely. These concerns have been addressed by modern passive safety systems, which require no human intervention to function. Most aspects of safety at nuclear plants have been improving since 1990. Newer reactor designs are safer than older ones, and older reactors still in operation have also improved through better safety procedures. Catastrophic scenarios involving terrorist attacks are also conceivable. An interdisciplinary team from the Massachusetts Institute of Technology (MIT) has estimated that, given a three-fold increase in nuclear power from 2005 to 2055 and an unchanged accident frequency, four core-damage accidents would be expected in that period. In 2020, a parliamentary inquiry in Australia found nuclear power to be one of the safest and cleanest among 140 specific technologies analyzed, based on data provided by MIT. The European Commission Joint Research Centre report of 2021 (see above) concluded: Severe accidents with core melt did happen in nuclear power plants and the public is well aware of the consequences of the three major accidents, namely Three Mile Island (1979, US), Chernobyl (1986, Soviet Union) and Fukushima (2011, Japan). The NPPs involved in these accidents were of various types (PWR, RBMK and BWR) and the circumstances leading to these events were also very different. Severe accidents are events with extremely low probability but with potentially serious consequences and they cannot be ruled out with 100% certainty. After the Chernobyl accident, international and national efforts focused on developing Gen III nuclear power plants designed according to enhanced requirements related to severe accident prevention and mitigation. The deployment of various Gen III plant designs started in the last 15 years worldwide and now practically only Gen III reactors are constructed and commissioned. These latest technology [plants achieve on the order of] 10⁻¹⁰ fatalities/GWh, see Figure 3.5-1 (of Part A). The fatality rates characterizing state-of-the art Gen III NPPs are the lowest of all the electricity generation technologies. Chernobyl steam explosion The Chernobyl steam explosion was a nuclear accident that occurred on 26 April 1986 at the Chernobyl Nuclear Power Plant in Ukraine. A steam explosion and graphite fire released large quantities of radioactive contamination into the atmosphere, which spread over much of the western USSR and Europe. It is considered the worst nuclear power plant accident in history, and is one of only two classified as a level 7 event on the International Nuclear Event Scale (the other being the Fukushima Daiichi nuclear disaster). The battle to contain the contamination and avert a greater catastrophe ultimately involved over 500,000 workers and cost an estimated 18 billion rubles, crippling the Soviet economy.
The accident raised concerns about the safety of the nuclear power industry, slowing its expansion for a number of years. Although the Chernobyl disaster became the icon of the nuclear power safety debate, there were other nuclear accidents in the USSR at the Mayak nuclear weapons production plant (near Chelyabinsk, Russia), and the total radioactive emissions of the Chelyabinsk accidents of 1949, 1957 and 1967 together were significantly higher than at Chernobyl. However, the region near Chelyabinsk was, and is, much more sparsely populated than the region around Chernobyl. The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) has conducted 20 years of detailed scientific and epidemiological research on the effects of the Chernobyl accident. Apart from the 57 direct deaths in the accident itself, UNSCEAR predicted in 2005 that up to 4,000 additional cancer deaths related to the accident would appear "among the 600 000 persons receiving more significant exposures (liquidators working in 1986–87, evacuees, and residents of the most contaminated areas)". According to the BBC, "It is conclusive that around 5,000 cases of thyroid cancer—most of which were treated and cured—were caused by the contamination. Many suspect that the radiation has caused or will cause other cancers, but the evidence is patchy. Amid reports of other health problems—including birth defects—it still is not clear if any can be attributed to radiation". Russia, Ukraine, and Belarus have been burdened with the continuing and substantial decontamination and health care costs of the Chernobyl disaster. Fukushima disaster Following an earthquake, tsunami, and failure of cooling systems at the Fukushima I Nuclear Power Plant and issues concerning other nuclear facilities in Japan on 11 March 2011, a nuclear emergency was declared. This was the first time a nuclear emergency had been declared in Japan, and 140,000 residents within 20 km (12 mi) of the plant were evacuated. Explosions and a fire resulted in increased levels of radiation, sparking a stock market collapse and panic-buying in supermarkets. The UK, France and some other countries advised their nationals to consider leaving Tokyo, in response to fears of spreading nuclear contamination. The accidents drew attention to ongoing concerns over Japanese nuclear seismic design standards and caused other governments to re-evaluate their nuclear programs. John Price, a former member of the Safety Policy Unit at the UK's National Nuclear Corporation, said that it "might be 100 years before melting fuel rods can be safely removed from Japan's Fukushima nuclear plant". Three Mile Island accident The Three Mile Island accident was a core meltdown in Unit 2 (a pressurized water reactor manufactured by Babcock & Wilcox) of the Three Mile Island Nuclear Generating Station in Dauphin County, Pennsylvania, near Harrisburg, United States, in 1979. It was the most significant accident in the history of the US commercial nuclear power generating industry, resulting in the release of approximately 2.5 million curies of radioactive noble gases and approximately 15 curies of iodine-131. Cleanup started in August 1979 and officially ended in December 1993, with a total cleanup cost of about $1 billion. The incident was rated a five on the seven-level International Nuclear Event Scale: Accident With Wider Consequences. The health effects of the Three Mile Island nuclear accident are widely, but not universally, agreed to be very low level.
However, there was an evacuation of 140,000 pregnant women and pre-school-age children from the area. The accident crystallized anti-nuclear safety concerns among activists and the general public, resulted in new regulations for the nuclear industry, and has been cited as a contributor to the decline of new reactor construction that was already underway in the 1970s. New reactor designs The nuclear power industry has moved to improve engineering design. Generation IV reactors are now in late-stage design and development, with the aim of improving safety, sustainability, efficiency, and cost. Key to the latest designs is the concept of passive nuclear safety: the plant does not require operator actions or electronic feedback in order to shut down safely in the event of a particular type of emergency (usually overheating resulting from a loss of coolant or of coolant flow). This is in contrast to some older reactor designs, where the natural tendency of the reaction was to accelerate rapidly at increased temperatures, so that cooling systems had to remain operative to prevent meltdown. Past design mistakes, as at Fukushima in Japan, failed to anticipate that a tsunami generated by an earthquake would disable the backup systems that were supposed to stabilize the reactor after the earthquake. New reactors with passive nuclear safety eliminate this failure mode. The United States Nuclear Regulatory Commission has formally engaged in pre-application activities with four applicants who have Generation IV reactor designs: two molten salt reactors, one compact fast reactor, and one modular high-temperature gas-cooled reactor. Health Health effects on populations near nuclear power plants and on workers A major concern in the nuclear debate is the long-term effects of living near, or working in, a nuclear power station. These concerns typically center on the potential for increased risks of cancer. However, studies conducted by non-profit, neutral agencies have found no compelling evidence of a correlation between nuclear power and risk of cancer. There has been considerable research on the effect of low-level radiation on humans. Debate on the applicability of the linear no-threshold (LNT) model versus radiation hormesis and other competing models continues; however, the low rate of cancer predicted at low doses means that large sample sizes are required in order to draw meaningful conclusions. A study conducted by the National Academy of Sciences found that the carcinogenic effects of radiation do increase with dose. The largest study of nuclear industry workers in history involved nearly half a million individuals and concluded that 1–2% of cancer deaths were likely due to occupational dose. This was at the high end of what LNT theory predicted, but was "statistically compatible".
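Under the linear no-threshold model, predicted excess cancers scale linearly with collective dose, so a risk coefficient can be applied directly to person-rem figures such as those quoted earlier for nuclear and coal plants. A sketch (the ~5.5% per person-sievert coefficient is the commonly cited ICRP whole-population value, used here as an assumption):

# LNT sketch: excess cancers ~= collective dose (person-Sv) * risk coefficient.
RISK_PER_PERSON_SV = 0.055  # assumed ICRP whole-population coefficient (~5.5%/Sv)

def excess_cancers(person_rem_per_year: float) -> float:
    person_sv = person_rem_per_year * 0.01  # 1 person-rem = 0.01 person-Sv
    return person_sv * RISK_PER_PERSON_SV

for plant, dose in [("nuclear, 1000 MWe", 136.0), ("equivalent coal", 490.0)]:
    print(f"{plant}: ~{excess_cancers(dose):.2f} predicted excess cancers/year")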
The Nuclear Regulatory Commission (NRC) has a factsheet that outlines six different studies. In 1990, the United States Congress requested the National Cancer Institute to conduct a study of cancer mortality rates around nuclear plants and other facilities, covering 1950 to 1984 and focusing on changes after the respective facilities began operating; it concluded there was no link. In 2000, the University of Pittsburgh found no link to heightened cancer deaths in people living within 5 miles of the plant at the time of the Three Mile Island accident. The same year, the Illinois Public Health Department found no statistical abnormality in childhood cancers in counties with nuclear plants. In 2001, the Connecticut Academy of Science and Engineering confirmed that radiation emissions were negligibly low at the Connecticut Yankee Nuclear Power Plant. Also that year, the American Cancer Society investigated cancer clusters around nuclear plants and concluded there was no link to radiation, noting that cancer clusters occur regularly for unrelated reasons. Again in 2001, the Florida Bureau of Environmental Epidemiology reviewed claims of increased cancer rates in counties with nuclear plants; however, using the same data as the claimants, it observed no abnormalities. Scientists learned about exposure to high-level radiation from studies of the effects of the bombings of Hiroshima and Nagasaki. However, it is difficult to trace the relationship of low-level radiation exposure to resulting cancers and mutations, because the latency period between exposure and effect can be 25 years or more for cancer and a generation or more for genetic damage. Since nuclear generating plants have a relatively brief history, it is early to judge the effects. Most human exposure to radiation comes from natural background radiation. Natural sources of radiation amount to an average annual radiation dose of 295 millirems (0.00295 sieverts). The average person receives about 53 mrem (0.00053 Sv) from medical procedures and 10 mrem from consumer products per year, as of May 2011. According to the National Safety Council, people living within 50 miles (80 km) of a nuclear power plant receive an additional 0.01 mrem per year, while living within 50 miles of a coal plant adds 0.03 mrem per year. In its 2000 report, "Sources and effects of ionizing radiation", UNSCEAR also gives values for areas where the radiation background is very high: for example, 370 nanograys per hour (0.32 rad/a) on average in Yangjiang, China (about 3.24 mSv or 324 mrem per year), or 1,800 nGy/h (1.6 rad/a) in Kerala, India (about 15.8 mSv or 1,580 mrem per year). There are also other "hot spots", with maximum values of 17,000 nGy/h (15 rad/a) in the hot springs of Ramsar, Iran (equivalent to about 149 mSv or 14,900 mrem per year). The highest background appears to be in Guarapari, Brazil, with a reported 175 mSv per year (17,500 mrem per year) and a maximum value of 90,000 nGy/h (79 rad/a) given in the UNSCEAR report (on the beaches). A study of the Kerala radiation background, using a cohort of 385,103 residents, concluded that the data "showed no excess cancer risk from exposure to terrestrial gamma radiation" and that "although the statistical power of the study might not be adequate due to the low dose, our cancer incidence study [...] suggests it is unlikely that estimates of risk at low doses are substantially greater than currently believed." Current guidelines established by the NRC require extensive emergency planning between nuclear power plants, the Federal Emergency Management Agency (FEMA), and local governments. The plans call for different zones, defined by distance from the plant, prevailing weather conditions and protective actions, and detail different categories of emergencies and the protective actions involved, including possible evacuation.
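The annual doses quoted for these high-background areas are straightforward unit conversions from the hourly dose rates, taking a gamma dose in grays as numerically equal to the dose in sieverts. A sketch of the conversion:

# Convert ambient dose rates (nGy/h) into annual doses, as quoted above.
HOURS_PER_YEAR = 8766  # average year, including leap days

sites_ngy_per_h = {"Yangjiang, China": 370, "Kerala, India": 1800,
                   "Ramsar, Iran (hot springs)": 17000}

for site, rate in sites_ngy_per_h.items():
    msv_per_year = rate * HOURS_PER_YEAR * 1e-6  # nGy -> mGy, taken as mSv
    print(f"{site}: {msv_per_year:.1f} mSv/yr ({msv_per_year * 100:.0f} mrem/yr)")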
A German study on childhood cancer in the vicinity of nuclear power plants, called "the KiKK study", was published in December 2007. According to Ian Fairlie, it "resulted in a public outcry and media debate in Germany which has received little attention elsewhere". It had been established partly as a result of an earlier study by Körblein and Hoffmann which had found statistically significant increases in solid cancers (54%) and in leukemia (76%) in children aged less than 5 living within 5 km (3.1 mi) of 15 German nuclear power plant sites. The KiKK study found a 2.2-fold increase in leukemias and a 1.6-fold increase in solid (mainly embryonal) cancers among children living within 5 km of all German nuclear power stations. In 2011, a new study of the KiKK data was incorporated into an assessment by the Committee on Medical Aspects of Radiation in the Environment (COMARE) of the incidence of childhood leukemia around British nuclear power plants. It found that the control sample of population used for comparison in the German study may have been incorrectly selected, and that other possible contributory factors, such as socio-economic ranking, were not taken into consideration. The committee concluded that there is no significant evidence of an association between the risk of childhood leukemia (in under-5-year-olds) and living in proximity to a nuclear power plant. The European Commission Joint Research Centre report of 2021 (see above) concluded: The average annual exposure to a member of the public, due to effects attributable to nuclear energy-based electricity production is about 0.2 microsievert, which is ten thousand times less than the average annual dose due to the natural background radiation. According to the LCIA (Life Cycle Impact Analysis) studies analysed in Chapter 3.4 of Part A, the total impact on human health of both the radiological and non-radiological emissions from the nuclear energy chain are comparable with the human health impact from offshore wind energy. Safety culture in host nations Some developing countries which plan to go nuclear have very poor industrial safety records and problems with political corruption. Both inside and outside China, the speed of the country's nuclear construction program has raised safety concerns. Prof. He Zuoxiu, who was involved with China's atomic bomb program, has said that plans to expand production of nuclear energy twentyfold by 2030 could be disastrous, as China was seriously underprepared on the safety front. China's fast-expanding nuclear sector is opting for cheap technology that "will be 100 years old by the time dozens of its reactors reach the end of their lifespans", according to diplomatic cables from the US embassy in Beijing. The rush to build new nuclear power plants may "create problems for effective management, operation and regulatory oversight", with the biggest potential bottleneck being human resources: "coming up with enough trained personnel to build and operate all of these new plants, as well as regulate the industry". The challenge for the government and nuclear companies is to "keep an eye on a growing army of contractors and subcontractors who may be tempted to cut corners". China is advised to maintain nuclear safeguards in a business culture where quality and safety are sometimes sacrificed in favor of cost-cutting, profits, and corruption. China has asked for international assistance in training more nuclear power plant inspectors. Nuclear proliferation and terrorism concerns Opposition to nuclear power is frequently linked to opposition to nuclear weapons.
The anti-nuclear scientist Mark Z. Jacobson believes the growth of nuclear power has "historically increased the ability of nations to obtain or enrich uranium for nuclear weapons". However, many countries have civilian nuclear power programs without developing nuclear weapons, and all civilian reactors are covered by IAEA non-proliferation safeguards, including international inspections at the plants. Iran developed a nuclear power program under IAEA treaty controls and attempted to develop a parallel nuclear weapons program kept strictly separate from it, in order to avoid IAEA inspections. Modern light water reactors used in most civilian nuclear power plants cannot be used to produce weapons-grade uranium. The 1993–2013 Megatons to Megawatts Program recycled 500 tonnes of Russian warhead-grade high-enriched uranium (equivalent to 20,008 nuclear warheads) into low-enriched uranium used as fuel for civilian power plants, and has been described as the most successful non-proliferation program in history. Four AP1000 reactors, designed by the American Westinghouse Electric Company, were being built in China as of 2011, and a further two AP1000 reactors were to be built in the US. Hyperion Power Generation, which is designing modular reactor assemblies that are proliferation-resistant, is a privately owned US corporation, as is TerraPower, which has the financial backing of Bill Gates and his Bill & Melinda Gates Foundation. Vulnerability of plants to attack The development of covert and hostile nuclear installations has occasionally been prevented by military operations, in what is described as "radical counter-proliferation":
Operation Gunnerside (1943), by the Allies, against the heavy water plant in German-occupied Norway
Operation Scorch Sword (1980), by Iran, against the construction site of the Osirak nuclear complex in Iraq
Operation Opera (1981), by Israel, against the same Osirak site in Iraq
Iraqi air force attacks on the unfinished Bushehr nuclear plant in Iran during the Iran–Iraq War (1986, 1987)
Operation Outside the Box (2007), by Israel, against a suspected nuclear construction site at Al Kibar in Syria
No military operations have targeted live nuclear reactors, and none resulted in nuclear incidents. No terrorist attacks have targeted live reactors either; the only recorded quasi-terrorist attacks on nuclear power plant construction sites were by anti-nuclear activists:
In 1977–1982, ETA carried out numerous attacks, including bombings and kidnappings, against the Lemóniz Nuclear Power Plant construction site and its personnel
On 18 January 1982, the environmental activist Chaïm Nissim fired RPG rockets at the Superphénix reactor construction site in France, causing no damage
According to a 2004 report by the U.S. Congressional Budget Office, "The human, environmental, and economic costs from a successful attack on a nuclear power plant that results in the release of substantial quantities of radioactive material to the environment could be great." The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the 11 September 2001 attacks. If terrorist groups could sufficiently damage safety systems to cause a core meltdown at a nuclear power plant, and/or sufficiently damage spent fuel pools, such an attack could lead to widespread radioactive contamination. New reactor designs have features of passive safety, such as the flooding of the reactor core without active intervention by reactor operators.
However, these safety measures have generally been developed and studied with respect to accidents, not to deliberate attacks on a reactor by a terrorist group. The US Nuclear Regulatory Commission now also requires new reactor license applications to consider security during the design stage. Use of waste byproduct as a weapon There is a concern that if the by-products of nuclear fission (the nuclear waste generated by a plant) were left unprotected, they could be stolen and used in a radiological weapon, colloquially known as a "dirty bomb". No actual terrorist attack involving a "dirty bomb" has ever been recorded, although cases of illegal trade in fissile material have occurred. There are additional concerns that the transportation of nuclear waste along roadways or railways opens it up to potential theft. The United Nations has since called upon world leaders to improve security in order to prevent radioactive material from falling into the hands of terrorists, and such fears have been used as justifications for centralized, permanent, and secure waste repositories and increased security along transportation routes. Spent fuel is not suitable for creating an effective nuclear weapon in the traditional sense, in which the fissile material is the means of explosion. Nuclear reprocessing plants also recover uranium from spent reactor fuel and take the remaining waste into their custody. Public opinion Support for nuclear power varies between countries and has changed significantly over time. Trends and future prospects Following the Fukushima Daiichi nuclear disaster, the International Energy Agency halved its estimate of additional nuclear generating capacity to be built by 2035. Platts has reported that "the crisis at Japan's Fukushima nuclear plants has prompted leading energy-consuming countries to review the safety of their existing reactors and cast doubt on the speed and scale of planned expansions around the world". In 2011, The Economist reported that nuclear power "looks dangerous, unpopular, expensive and risky", and that "it is replaceable with relative ease and could be forgone with no huge structural shifts in the way the world works". In September 2011, the German engineering giant Siemens announced that it would withdraw entirely from the nuclear industry in response to the Fukushima nuclear disaster in Japan, and would instead boost its work in the renewable energy sector. Commenting on the German government's policy to close nuclear plants, Werner Sinn, president of the Ifo Institute for Economic Research at the University of Munich, stated: "It is wrong to shut down the atomic power plants, because this is a cheap source of energy, and wind and solar power are by no means able to provide a replacement. They are much more expensive, and the energy that comes out is of inferior quality. Energy-intensive industries will move out, and the competitiveness of the German manufacturing sector will be reduced or wages will be depressed." Regarding the proposition that "improved communication by industry might help to overcome current fears regarding nuclear power", the Princeton University physicist M. V. Ramana says that the basic problem is "distrust of the social institutions that manage nuclear energy"; a 2001 survey by the European Commission found that "only 10.1 percent of Europeans trusted the nuclear industry".
This public distrust is periodically reinforced by safety violations by nuclear companies, or by ineffectiveness or corruption on the part of nuclear regulatory authorities. Once lost, says Ramana, trust is extremely difficult to regain. Faced with public antipathy, the nuclear industry has "tried a variety of strategies to persuade the public to accept nuclear power", including the publication of numerous "fact sheets" that discuss issues of public concern. Ramana says that none of these strategies have been very successful. In March 2012, E.ON UK and RWE npower announced they would be pulling out of developing new nuclear power plants in the UK, placing the future of nuclear power in the UK in doubt. More recently, Centrica (owner of British Gas) pulled out of the race on 4 February 2013 by relinquishing its 20% option on four new nuclear plants, and Cumbria County Council (a local authority) turned down an application for a final waste repository on 30 January 2013 – there is currently no alternative site on offer. In terms of current nuclear status and future prospects: Ten new reactors were connected to the grid in 2015, the highest number since 1990, but expanding Asian nuclear programs were balanced by retirements of aging plants and nuclear reactor phase-outs; seven reactors were permanently shut down. The 441 operational reactors had a worldwide net capacity of 382,855 megawatts of electricity in 2015; however, some reactors classified as operational were not producing any power. 67 new nuclear reactors were under construction in 2015, including four EPR units. The first two EPR projects, in Finland and France, were meant to lead a nuclear renaissance, but both faced costly construction delays. Construction commenced on two Chinese EPR units in 2009 and 2010. The Chinese units were to start operation in 2014 and 2015, but the Chinese government halted construction because of safety concerns. China's National Nuclear Safety Administration carried out on-site inspections and issued a permit to proceed with function tests in 2016; Taishan 1 was expected to start up in the first half of 2017 and Taishan 2 by the end of 2017. In February 2020, the world's first open-source platform for the design, construction, and financing of nuclear power plants, OPEN100, was launched in the United States. The project aims to provide a clear pathway to a sustainable, low-cost, zero-carbon future. Collaborators in the OPEN100 project include Framatome, Studsvik, the UK's National Nuclear Laboratory, Siemens, Pillsbury, the Electric Power Research Institute, the US Department of Energy's Idaho National Laboratory, and Oak Ridge National Laboratory. In October 2020, the U.S. Department of Energy announced the selection of two U.S.-based teams to receive $160 million in initial funding under the new Advanced Reactor Demonstration Program (ARDP): TerraPower LLC (Bellevue, WA) and X-energy (Rockville, MD) were each awarded $80 million to build two advanced nuclear reactors that could be operational within seven years. See also Footnotes Sources Letcher, Trevor M., ed. (2020). Future Energy: Improved, Sustainable and Clean Options for our Planet (Third ed.). Elsevier. ISBN 978-0081028865. MacKay, David J. C. (2008). Sustainable energy – without the hot air. UIT Cambridge. ISBN 978-0954452933. OCLC 262888377. Archived from the original on 28 August 2021. IPCC (2014). Edenhofer, O.; Pichs-Madruga, R.; Sokona, Y.; Farahani, E.; et al. (eds.).
Climate Change 2014: Mitigation of Climate Change: Working Group III contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. ISBN 978-1107058217. OCLC 892580682. Archived from the original on 26 January 2017. IPCC (2018). Masson-Delmotte, V.; Zhai, P.; Pörtner, H.-O.; Roberts, D.; et al. (eds.). Global Warming of 1.5 °C. An IPCC Special Report on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty (PDF) (Report). Archived (PDF) from the original on 20 November 2020. Further reading Ferguson, Charles D. (2007). Nuclear energy: balancing benefits and risks. Council on Foreign Relations. ISBN 978-0876094006. Ferguson, Charles D.; Marburger, Lindsey E.; Farmer, J. Doyne; Makhijani, Arjun (2010). "A US nuclear future?". Nature. 467 (7314): 391–393. Bibcode:2010Natur.467..391F. doi:10.1038/467391a. PMID 20864972. S2CID 4427192. Diaz-Maurin, François (2014). "Going beyond the Nuclear Controversy". Environmental Science & Technology. 48 (1): 25–26. Bibcode:2014EnST...48...25D. doi:10.1021/es405282z. PMID 24364822. Schneider, Mycle, Steve Thomas, Antony Froggatt, Doug Koplow (2016). The World Nuclear Industry Status Report: World Nuclear Industry Status as of 1 January 2016. External links The World Nuclear Industry Status Reports website Beyond Nuclear at Nuclear Policy Research Institute advocacy organization Greenpeace Nuclear Campaign World Information Service on Energy (WISE) "Critical Hour: Three Mile Island, The Nuclear Legacy, And National Security" (PDF). (929 KB) Online book "Natural Resources Defense Council" (PDF). (158 KB) The New York Times Finally Reports the Economic Disaster of New Nukes American Nuclear Society (ANS) Representing the People and Organisations of the Global Nuclear Profession SCK.CEN Belgian Nuclear Research Centre Nuclear Energy Institute (NEI) Atomic Insights Freedom for Fission The Nuclear Energy Option, online book by Bernard L. Cohen. Emphasis on risk estimates of nuclear. Fairewinds Energy Education Should we use nuclear energy? – Wikidebate on Wikiversity
phase-out of gas boilers
The phase-out of gas boilers is a set of policies to remove the use of fossil gas (or "natural gas") from the heating of buildings and from use in appliances. Typically, gas is used to heat water for showering or for central heating. In many countries, gas heating is one of the major contributors to greenhouse gas emissions and climate damage, leading a growing number of countries to introduce bans. Air source heat pumps are the main alternative. The International Energy Agency has said that new gas boilers should be banned no later than 2025. Many installations and appliances have a life-span of 25 years, leading to calls for bans to take effect immediately, or at the latest by 2025, because otherwise targets of net zero by 2050 cannot, or are unlikely to, be reached. However, fossil fuel lobbyists are resisting the phase-out. List of gas boiler bans The following table lists different ban types in new or existing buildings. See also Fossil fuel phase-out Phase-out of fossil fuel vehicles Plastic bans Montreal Protocol
eren holding
Eren Holding is a conglomerate headquartered in Istanbul, Turkey, with business interests in paper, packaging, cement, energy, retail and textiles. The holding company was established in 1997, although the history of the group dates back to 1969. Eren Holding's chairman is Ahmet Eren, and the group employs 10,000 people. In 2022, Eren Enerji generated 5% of the country's electricity, second to EÜAŞ at 15%. Due to its coal-fired power stations, its subsidiary Eren Enerji is one of the largest private-sector greenhouse gas emitters in Turkey. History Eren was established by four brothers from Bitlis. In 1969, Er-os Çamaşırları A.Ş., an underwear manufacturer and trademark, was established. In 1998, Eren Holding entered the energy sector with Modern Enerji Elektrik Üretim Otoprodüktör Grubu A.Ş. In 2003, the Rixos Hotel Bodrum was put into service in Bodrum, Turkey, marking the company's entry into the tourism industry. In 2007, Eren Enerji started construction of a 1360 MW coal-fired power plant in Zonguldak, which was completed in 2010. In 2012, Eren Perakende created the multi-brand shoe concept SuperStep stores and the multi-brand kids' store chain SuperKids. In 2014, a 6 MW biomass power plant started operating. In 2015, Modern Enerji established the first solid waste incineration facility in Turkey. Also in 2015, Modern Karton completed construction of a new paper factory. In the late 2010s, Eren Holding's power plants generated 7.5% of Turkey's electricity. In 2019, Eren Enerji Elektrik Üretim A.Ş. received a silver award in the industry and energy category of the Green World Awards, which was criticized by environmental organizations as greenwashing. Operations Eren Holding controls businesses across several sectors, including energy, paper, cement, retail, ports, packaging and textiles. It owns ports in Zonguldak and Mersin. Its subsidiaries Eren Kağıt and Modern Karton collect waste paper and recycle it into corrugated fiberboard. Eren owns Turkey's biggest cement factory, Medcem Çimento, in Mersin. As local demand collapsed in 2019, the factory concentrated on exports. Eren Tekstil A.Ş. manufactures cotton textiles. Eren Perakende represents a number of international brands in Turkey, including Lacoste, Burberry, GANT, Nautica and Converse. In addition to its physical stores, it sells online through the Occasion, Sanal Çadır, SuperStep and FashFed e-commerce sites. Eren Holding's subsidiary Eren Enerji owns the coal-fired ZETES power stations. Another energy subsidiary, Modern Enerji, owns a solid waste incineration facility and natural gas-fired and biomass-fired power plants in Çorlu. Greenhouse gas emissions Due to its coal-fired power stations, a coal-fired steam boiler and a cement factory in Silifke, Eren is one of the largest private-sector greenhouse gas emitters in Turkey. As the largest private-sector owner of coal-fired electricity generating capacity in Turkey, the company is on the Global Coal Exit List and is one of the largest greenhouse gas emitters in the country; however, although corporate emissions measurements are reported to the government, they are not published. Climate TRACE estimates that Eren Enerji's coal-fired power plants emitted 15 million tons (2.7%) of the country's total 560 million tons of greenhouse gas in 2021. References Sources "Seventh National Communication (version 2) of Turkey under the UNFCCC (this is also the third biennial report)" (PDF). Ministry of Environment and Urbanization. August 2019.
External links "Eren Enerji" article at Global Energy Monitor
midwestern greenhouse gas reduction accord
The Midwestern Greenhouse Gas Reduction Accord (Midwestern Accord) was a regional agreement by six governors of states in the US Midwest who were members of the Midwestern Governors Association (MGA), and the premier of one Canadian province, whose purpose was to reduce greenhouse gas emissions to combat climate change. The accord has been inactive since March 2010, when an advisory group presented a plan for action to the association with a scheduled implementation date of January 2012. Signatories to the accord were the U.S. states of Minnesota, Wisconsin, Illinois, Iowa, Michigan and Kansas, and the Canadian province of Manitoba. Observers of the accord were Indiana, Ohio, and South Dakota, as well as the Canadian province of Ontario. While the Midwest has intensive manufacturing and agriculture sectors, making it the most coal-dependent region in North America, it also has significant renewable energy resources and is particularly vulnerable to the climate change caused by burning coal and other fossil fuels. The Midwestern Accord was the fourth tier of the MGA Energy Security and Climate Stewardship Summit Platform, signed on November 15, 2007. It established the Midwestern Greenhouse Gas Reduction Program, which aimed to: establish greenhouse gas reduction targets and time frames consistent with signing states' targets; develop a market-based, multi-sector cap-and-trade mechanism to help achieve those reduction targets; establish a system to enable tracking, management, and crediting for entities that reduce greenhouse gas emissions; and develop and implement additional steps as needed to achieve the reduction targets, such as low-carbon fuel standards and regional incentives and funding mechanisms. Through the Midwestern Accord, the governors agreed to establish a Midwestern greenhouse gas reduction program to reduce greenhouse gas emissions in their states, as well as a working group to provide recommendations regarding the implementation of the accord. In June 2009, the Midwestern Greenhouse Gas Reduction Accord Advisory Group finalized its draft recommendations. In March 2010, the advisory group presented a plan to the MGA that called for implementation beginning in January 2012. No further action was taken, as leadership in several of the states switched positions on climate policy. In July 2014, accord member Kansas and observers Indiana, South Dakota, and Ohio joined a lawsuit opposing the EPA Clean Power Plan, federal climate regulations which could have been met by implementation of the accord. The MGGRA became defunct after the 2010 United States elections. See also Intergovernmental Panel on Climate Change List of climate change initiatives The Climate Registry Regional Greenhouse Gas Initiative Western Climate Initiative References External links Midwestern Greenhouse Gas Reduction Accord Web site MGA Energy Initiatives Midwestern Greenhouse Gas Accord 2007 MGA Energy Security and Climate Stewardship Platform for the Midwest
saudi aramco
Saudi Aramco (Arabic: أرامكو السعودية ʾArāmkū as-Suʿūdiyyah), officially the Saudi Arabian Oil Group or simply Aramco, is a petroleum and natural gas company that is the national oil company of Saudi Arabia. As of 2022, it is the second-largest company in the world by revenue and is headquartered in Dhahran. It has repeatedly achieved the largest annual profits in global corporate history. Saudi Aramco has both the world's second-largest proven crude oil reserves, at more than 270 billion barrels (43 billion cubic metres), and the largest daily oil production of all oil-producing companies. Saudi Aramco operates the world's largest single hydrocarbon network, the Master Gas System. In 2013, its crude oil production total was 3.4 billion barrels (540 million cubic metres), and it manages over one hundred oil and gas fields in Saudi Arabia, including 288.4 trillion standard cubic feet (scf) of natural gas reserves. Along the Eastern Province, Saudi Aramco most notably operates the Ghawar Field (the world's largest onshore oil field) and the Safaniya Field (the world's largest offshore oil field). On 11 December 2019, the company's shares commenced trading on the Tadawul stock exchange. The shares rose to 35.2 Saudi riyals, giving it a market capitalisation of about US$1.88 trillion, and surpassed the US$2 trillion mark on the second day of trading. In the 2023 Forbes Global 2000, Saudi Aramco was ranked as the second-largest public company in the world.

History
Saudi Aramco's origins trace to the oil shortages of World War I and the exclusion of American companies from Mesopotamia by the United Kingdom and France under the San Remo Petroleum Agreement of 1920. The US administration had popular support for an "Open Door policy", which Herbert Hoover, secretary of commerce, initiated in 1921. Standard Oil of California (SoCal) was among those US companies seeking new sources of oil from abroad. Through its subsidiary company, the Bahrain Petroleum Co. (BAPCO), SoCal struck oil in Bahrain on 30 May 1932. This event heightened interest in the oil prospects of the Arabian mainland. On 29 May 1933, the Saudi Arabian government granted a concession to SoCal in preference to a rival bid from the Iraq Petroleum Co. The concession allowed SoCal to explore for oil in Saudi Arabia. SoCal assigned this concession to a wholly owned subsidiary, California-Arabian Standard Oil (CASOC). In 1936, with the company having had no success at locating oil, the Texas Company (Texaco) purchased a 50% stake in the concession. After four years of fruitless exploration, the first success came with the seventh drill site in Dhahran in 1938, a well referred to as Dammam No. 7. This well immediately produced over 1,500 barrels per day (240 m3/d), giving the company confidence to continue. On 31 January 1944, the company name was changed from California-Arabian Standard Oil Co. to Arabian American Oil Co. (or Aramco). In 1948, Standard Oil of New Jersey (later known as Exxon) purchased 30% and Socony Vacuum (later Mobil) purchased 10% of the company, with SoCal and Texaco retaining 30% each. The newcomers were also shareholders in the Iraq Petroleum Co. and had to get the restrictions of the Red Line Agreement lifted in order to be free to enter into this arrangement. In 1949, Aramco had made incursions into the Emirate of Abu Dhabi, leading to a border dispute between Abu Dhabi and Saudi Arabia.
In 1950, King Abdulaziz threatened to nationalize his country's oil facilities, thus pressuring Aramco to agree to share profits 50/50. A similar process had taken place with American oil companies in Venezuela a few years earlier. The American government granted US Aramco member companies a tax break known as the golden gimmick, equivalent to the profits given to King Abdulaziz. In the wake of the new arrangement, the company's headquarters were moved from New York to Dhahran. In 1951, the company discovered the Safaniya Oil Field, the world's largest offshore field. In 1957, the discovery of smaller connected oil fields confirmed the Ghawar Field as the world's largest onshore field. In 1975, Saudi Arabia's second five-year economic plan included a Master Gas Plan. Natural gas would be used to generate power, rather than flaring the gas. The plan counted on using the associated gas, but by 1985, Aramco was able to include a billion standard cubic feet per day (Bscfd) of non-associated gas. This non-associated gas was produced from the Khuff Formation, a limestone layer 650 metres (2,130 ft) below the oil-producing Arab Zone. In 1994, Aramco discovered more non-associated gas in the deeper Jawf sandstone formation, and built plants in Hawiyah and Haradh to process it. This increased the capacity of the Master Gas System to 9.4 billion scfd.

Yom Kippur War
In 1973, following US support for Israel during the Yom Kippur War, the Saudi Arabian government acquired a 25% "participation interest" in Aramco's assets. It increased its participation interest to 60% in 1974 and acquired the remaining 40% interest in 1976. Aramco continued to operate and manage the former Aramco assets, including its concessionary interest in certain Saudi oil fields, on behalf of the Saudi government until 1988. In November 1988, a royal decree created a new Saudi company, the Saudi Arabian Oil Company (Saudi Aramco), to take control of the former Aramco assets, and it took over the management and operation of Saudi Arabia's oil and gas fields from Aramco and its partners. In 1989–90, high-quality oil and gas were discovered in three areas south of Riyadh, including the Raghib area about 77 miles (124 km) southeast of the capital.

Persian Gulf War
In September 1990, after the start of the Persian Gulf War, Aramco was expected to replace much of the oil production removed from the global market due to the embargo of Iraq and occupied Kuwait. This amounted to producing an extra 4.8 million barrels per day (Mbpd) to keep the global oil market stable. In addition, Aramco was expected to provide all of the coalition's aviation and diesel needs. Aramco recommissioned 146 Harmaliyah, Khurais, and Ghawar oil wells, with associated gas-oil separation plants and a saltwater treatment pipeline, that had been mothballed during the 1980s oil price collapse. Daily production increased from 5.4 Mbpd in July to 8.5 Mbpd in December 1990 after a three-month de-mothballing effort. Starting in 1990, Aramco embarked on an expansion of crude oil sales in the Asian market. Agreements with South Korea, the Philippines, and China resulted. By 2016, about 70% of Aramco's crude oil sales were to Asia.

2000s
In May 2001, Saudi Arabia announced the Gas Initiative, which proposed forming three joint ventures with eight IOCs for gas exploration on pure upstream acreage.
Core Venture 1 included south Ghawar and north Rub' Al-Khali, Core Venture 2 included the Red Sea, while Core Venture 3 involved Shaybah and Kidan. In 2003, Royal Dutch Shell and TotalEnergies formed a partnership with Saudi Aramco in Core Venture 3. In 2004, Core Venture 1 became three separate joint ventures with Saudi Aramco holding 20%: one with Lukoil, a second with Sinopec, and a third with Repsol. By 2004, Aramco was producing 8.6 million barrels per day (mbpd) out of a potential 10 mbpd. In 2005, Aramco launched a five-year plan to spend US$50 billion to increase its daily capacity to 12.5 mbpd by increasing production and refining capacity and doubling the number of drilling rigs. In 2005, Saudi Aramco was the world's largest company, with an estimated market value of US$781 billion. In June 2008, in response to crude oil prices exceeding US$130 a barrel, Aramco announced it would increase production to 9.7 million barrels per day (mbpd). Then, as prices plummeted, Aramco stated in January 2009 that it would reduce production to 7.7 mbpd. In 2011, Saudi Aramco started production from the Karan Gas Field, with an output of more than 400 million scf per day. In January 2016, the Deputy Crown Prince of Saudi Arabia, Mohammad bin Salman Al Saud, announced he was considering listing shares of the state-owned company, and selling around 5% of them in order to build a large sovereign wealth fund. On 26 April 2017, Saudi security forces thwarted an attempted attack on an Aramco oil distribution center involving an unmanned boat from Yemen. In September 2018, The Wall Street Journal reported that Aramco was considering a US$1 billion venture-capital fund to invest in international technology firms. In June 2019, a report by the Financial Times claimed that Aramco had been bearing ministry-related expenses, boosting the finance ministry's budget allocation. These included Energy Minister Khalid Al Falih's company-related and diplomatic trips, as well as his stays in luxurious hotels. However, an ally said that Falih's policies had delivered additional oil revenues that far exceeded his expenses. In September 2019, Saudi Arabia appointed Yasir Al-Rumayyan as the chairman of Aramco. Al-Rumayyan, head of the country's sovereign wealth fund, replaced Khalid Al-Falih, who had held the position since 2015.

2012 cyber attack
Aramco computers were attacked by a virus on 15 August 2012. The following day Aramco announced that none of the infected computers were part of the network directly tied to oil production, and that the company would soon resume full operations. Hackers claimed responsibility for the spread of the computer virus, which hit companies within the oil and energy sectors. A group named "Cutting Sword of Justice" claimed responsibility for an attack on 30,000 Saudi Aramco workstations, causing the company to spend months restoring its services. The group later indicated that the Shamoon virus had been used in the attack. Due to this attack, the main Aramco website went down, and a message appeared on the home page apologizing to customers. Computer security specialists said that "the attack, known as Shamoon, is said to have hit 'at least one organization' in the sector. Shamoon is capable of wiping files and rendering several computers on a network unusable." Richard Clarke suggested the attack was part of Iran's retaliation for the US involvement in Stuxnet.
Security researcher Chris Kubecka, who helped the company establish security after the attack, detailed its level of sophistication in her Black Hat USA 2015 presentation and in episode 30 of Darknet Diaries.

2019 drone attack
On 14 September 2019, there was a drone attack on two Saudi Aramco plants: the Abqaiq oil processing facility and the Khurais oil field. Houthi rebels claimed responsibility for the attack. The attack cut 5.7 million barrels per day (bpd) of Saudi crude output, over 5% of the world's supply. There were discussions by Saudi Arabian officials on postponing Aramco's IPO, because the attacks "sidelined more than half of the kingdom's output" of oil.

2019 Initial public offering (IPO)
Since around 2018, Saudi Arabia had been considering putting a portion of Saudi Aramco's ownership, up to 5%, onto public trading via a staged initial public offering (IPO), so as to reduce the cost to the government of running the company. While the IPO had been vetted by major banks, it was delayed over concerns about Aramco's corporate structure through 2018 into 2019. The September 2019 drone attacks on Aramco's facilities also delayed the onset of the IPO. On 9 April 2019, Aramco issued bonds collectively valued at US$12 billion. Its first international bond issue received more than US$100 billion in orders from foreign investors, breaking all records for a bond issue by an emerging-market entity. Aramco announced on Sunday 3 November 2019 its plan to list 1.5% of its value as an IPO on the Tadawul stock exchange. On 9 November 2019, Saudi Aramco released a 600-page prospectus giving details of the IPO. According to the specifications provided, up to 0.5% of the shares were locked for individual retail investors. On 4 December 2019, Saudi Aramco priced its offering at 32 Saudi riyals (approximately US$8.53 at the time) per share. The company generated subscriptions totalling US$119 billion, representing 456% of the shares on offer. It raised US$25.6 billion in its IPO, making it the world's largest IPO, surpassing that of the Alibaba Group in 2014. The company commenced trading on Tadawul on 11 December 2019, with shares rising 10% to 35.2 riyals, giving the company a market capitalisation of about US$1.88 trillion and making Saudi Aramco the world's largest listed company. The entire Tadawul has a market capitalisation of US$2.22 trillion.

Global Medium Term Note Programme
According to a bourse filing made by Aramco, the likes of Goldman Sachs, HSBC, Morgan Stanley, JPMorgan, and NCB Capital were hired by the company to organize investor calls prior to the planned transaction. A document published by one of the other banks said to be involved showed that the deal also included BNP Paribas, MUFG, BofA Securities, SMBC Nikko, First Abu Dhabi Bank, Societe Generale, and BOC International. In November 2020, the company reported a fall in its third-quarter net profit, due to lower crude prices and a drop in demand following the COVID-19 pandemic.

2020s
On 10 March 2020, Saudi Aramco announced a global partnership with Formula One, landing a multi-year deal. On 17 June 2020, Saudi Aramco acquired a 70% share in SABIC, a chemicals manufacturing company. In June 2020, Saudi Aramco laid off nearly 500 of its more than 70,000 employees, as global energy firms reduced their workforces due to the COVID-19 pandemic.
Most of the workers who lost their jobs at Aramco were foreigners. On 31 July 2020, Saudi Aramco lost its title as the world's largest listed company by market capitalization to Apple. On 9 August 2020, Saudi Aramco reported a 50% fall in net income for the first half of its financial year, as demand for oil and prices continued to fall due to the coronavirus crisis. On 3 November 2020, Saudi Aramco reported a 44.6% drop in third-quarter net profit amid the COVID-19 pandemic. On 14 December 2020, Saudi state TV announced that an oil tanker carrying over 60,000 metric tons of unleaded gasoline from an Aramco refinery at Yanbu had been attacked by a smaller boat rigged with explosives. In March 2021, Saudi Aramco announced that earnings in 2020 fell by nearly 45% compared with 2019, as lockdowns around the world following the COVID-19 pandemic curbed demand for oil. On 19 March 2021, an Aramco refinery was attacked by six bomb-laden drones. The attack, which was claimed by Houthi rebels, started a fire but caused no injuries or damage, according to the official Saudi Press Agency. On 21 March 2021, Saudi Aramco signed an agreement to secure China's energy supplies for the next 50 years, and also to develop new technologies to combat climate change. Later in 2021, it also signed a pipeline infrastructure deal with a consortium led by EIG. In July 2021, Saudi Aramco appointed former HSBC Holdings Plc chief executive officer Stuart Gulliver to the company's board of directors. In October 2021, Saudi Aramco announced plans to achieve net-zero carbon emissions from its wholly owned operations by 2060. On 20 November 2021, Houthi fighters took credit for launching 14 drones at military targets in Riyadh, Abha, Jizan, Najran, and Jeddah, and at Aramco's refineries in Jeddah. In 2021, The Guardian reported that Aramco was not trying to diversify at the same rate as other oil companies, such as Shell and BP. Rather, Aramco announced in 2021 that it intended to increase crude capacity from 12 million barrels a day to 13 million barrels by 2027. In February 2022, following crude's ascent to nearly $95 per barrel, Saudi Aramco raised oil prices for clients in Asia, the United States, and Europe. In March 2022, Houthi fighters attacked an Aramco storage site in Jeddah, causing a fire in two storage tanks. The incident occurred during qualifying for the 2022 Saudi Arabian Grand Prix. On 11 May 2022, Saudi Aramco became the largest (most valuable) company in the world by market capitalization, surpassing Apple Inc. In August 2022, Saudi Aramco announced that it would acquire Valvoline's petroleum unit for $2.65 billion. In March 2023, Saudi Aramco announced record profits of $161 billion as oil prices soared following the COVID-19 pandemic. The figures eclipsed the numbers posted by ExxonMobil and Shell, which reported $55.7 billion and $39.9 billion in profit respectively. In September 2023, it was announced Saudi Aramco had reached agreement with the Latin American private equity fund Southern Cross Group to acquire the Santiago-headquartered fuel retailer Esmax Distribucion SPA. The acquisition marked Saudi Aramco's entry into the South American fuel retail market.

Operations
Saudi Aramco is headquartered in Dhahran, but its operations span the globe and include exploration, production, refining, chemicals, distribution and marketing. All these activities of the company are monitored by the Saudi Arabian Ministry of Petroleum and Mineral Resources together with the Supreme Council for Petroleum and Minerals.
However, the ministry has much more responsibility in this regard than the council.

Board of directors
Yasir Othman Al-Rumayyan (chairman), member of the Council of Economic and Development Affairs (Saudi Arabia)
Ibrahim Abdulaziz Al-Assaf, former Minister of Foreign Affairs and Minister of Finance
Mohammed Al-Jadaan, current Minister of Finance
Mohammad M. Al-Tuwaijri, former Minister of Economy and Planning
Nabil Al-Amoudi, former Minister of Transport
Mark Moody-Stuart, former group managing director and chairman of Royal Dutch Shell, Anglo American, HSBC, and the Foundation for the United Nations Global Compact
Andrew N. Liveris, former chairman and CEO of Dow Chemical
Lynn Elsenhans, former chairwoman and CEO of Sunoco
Peter Cella, former president and CEO of Chevron Phillips Chemical
Mark Weinberger, former chairman and CEO of Ernst & Young
Amin H. Nasser, president and CEO of Saudi Aramco

Exploration
A significant portion of the Saudi Aramco workforce consists of geophysicists and geologists. Saudi Aramco has been exploring for oil and gas reservoirs since 1982. Most of this process takes place at the EXPEC Advanced Research Center. Originally, Saudi Aramco used Cray supercomputers (CRAY-1M) in its EXPEC Advanced Research Center (ECC) to assist in processing the colossal quantity of data obtained during exploration, and in 2001 ECC decided to use Linux clusters as a replacement for the decommissioned Cray systems. ECC installed a new supercomputing system in late 2009 with a disk storage capacity of 1,050 terabytes (i.e., exceeding one petabyte), the largest storage installation in Saudi Aramco's history, to support its exploration in the frontier areas and the Red Sea.

Refining and chemicals
While the company did not originally plan on refining oil, the Saudi government wished to have only one company dealing with oil production. Therefore, on 1 July 1993, the government issued a royal decree merging Saudi Aramco with Samarec, the country's oil refining company. The following year, a Saudi Aramco subsidiary acquired a 40% equity interest in Petron Corporation, the largest crude oil refiner and marketer in the Philippines. Since then, Saudi Aramco has taken on the responsibility of refining oil and distributing it in the country. In 2008, Saudi Aramco sold its entire stake to the Ashmore Group, a London-listed investment group. Ashmore acquired an additional 11% when it made a required tender offer to other shareholders. By July 2008, Ashmore, through its SEA Refinery Holdings B.V., held 50.57% of Petron's stock. Ashmore's payment was made in December 2008. In December 2008, Ashmore acquired PNOC's 40% stake. In the same month, San Miguel Corporation (SMC) said it was in the final stages of negotiations with the Ashmore Group to buy up to 50.1% of Petron. In 2010, SMC acquired majority control of Petron Corporation. Currently, Saudi Aramco's refining capacity is 5.4 million barrels per day (860,000 m3/d): international joint and equity ventures account for 2,500 Mbbl/d (400,000 m3/d), domestic joint ventures for 1,900 Mbbl/d, and wholly owned domestic operations for 1,000 Mbbl/d (160,000 m3/d). Saudi Aramco's downstream operations are shifting emphasis to integrate refineries with petrochemical facilities. Its first such venture is Petro Rabigh, a joint venture with Sumitomo Chemical Co. that began in 2005 on the coast of the Red Sea.
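As a quick consistency check on the capacity breakdown above, a minimal sketch (Mbbl/d is read here as thousand barrels per day, the usual industry convention; the variable names are ours):

    # Saudi Aramco refining capacity components, from the figures above.
    international_jv = 2_500   # Mbbl/d (thousand barrels per day)
    domestic_jv = 1_900        # Mbbl/d
    wholly_owned = 1_000       # Mbbl/d
    total = international_jv + domestic_jv + wholly_owned
    print(total / 1_000, "million bbl/d")  # -> 5.4 million bbl/d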
To become a global leader in chemicals, Aramco agreed to acquire Royal Dutch Shell's 50% stake in their jointly owned refinery in Saudi Arabia for US$631 million.

List of refineries
List of domestic refineries:
Jazan Refinery and terminal projects (JRTP), Jazan (400,000 bbl/d (64,000 m3/d)); construction is ongoing
Jeddah Refinery (78,000 bbl/d (12,400 m3/d)); converted to a product storage terminal in November 2017
Ras Tanura Refinery (550,000 bbl/d (87,000 m3/d)); includes a crude distillation unit, a gas condensate unit, a hydrocracker, and catalytic reforming
The Saudi Aramco Jubail Refinery Co. (SASREF), Jubail (305,000 bbl/d (48,500 m3/d))
Riyadh Refinery (126,000 bbl/d (20,000 m3/d))
Yanbu Refinery (245,000 bbl/d (39,000 m3/d))

List of domestic refining ventures:
The Saudi Aramco Mobil Refinery Co. Ltd. (SAMREF), Yanbu (400,000 bbl/d (64,000 m3/d))
Petro Rabigh, Rabigh (400,000 bbl/d (64,000 m3/d))
Saudi Aramco Base Oil Co. (Luberef)
Saudi Aramco Total Refining and Petrochemical Co. (SATORP), Jubail (400,000 bbl/d (64,000 m3/d))
Yanbu Aramco Sinopec Refinery (YASREF), Yanbu (400,000 bbl/d (64,000 m3/d))

List of international refining ventures:
Fujian Refining and Petrochemical Co. (FRPC), People's Republic of China
Sinopec SenMei (Fujian) Petroleum Co. Ltd. (SSPC), People's Republic of China
Motiva Enterprises LLC, Port Arthur, Texas, United States (635,000 bbl/d (101,000 m3/d))
Showa Shell, Japan (445,000 bbl/d (70,700 m3/d))
S-Oil, Republic of Korea (669,000 bbl/d (106,400 m3/d))
Saudi Refining Inc., United States
Reliance Industries, India (no investment)

Saudi Aramco at one point had been exploring projects in Pakistan, including a $10 billion refinery project in Gwadar, which has since been cancelled. In 2022 it was revealed that Saudi Aramco was creating a joint venture with the North Huajin Chemical Industries group to create a new company, Huajin Aramco Petrochemical Company, which would develop a 300,000 bpd refining facility with ethylene steam cracking capabilities.

Shipping
Saudi Aramco has employed several tankers to ship crude oil, refined oil, and natural gas to various countries. It formerly had its own subsidiary, Vela International Marine, which was merged into the Bahri company, to handle shipping to North America, Europe, and Asia. It is a stakeholder in the King Salman Global Maritime Industries Complex, a shipyard that will be the largest in the world when complete.

Global investment
Saudi Aramco expanded its presence worldwide to include the three major global energy markets of Asia, Europe, and North America. In April 2019, Aramco signed a deal to acquire a 13% stake in the South Korean oil refiner Hyundai Oilbank for US$1.24 billion. Moreover, on 11 April 2019, Aramco signed an agreement with Poland's leading oil refiner, PKN Orlen, to supply it with Arabian crude oil.

Liquefied natural gas
Aramco plans to become a major producer of liquefied natural gas (LNG) in the world. It sold its first cargo of LNG from Singapore to an Indian buyer. The company is looking globally for potential joint ventures and partnerships to achieve its goal in the LNG market.

Saudization
The original concession agreement included Article 23; as Ali Al-Naimi pointed out, this was a "key building block in the shaping of Saudi society for decades to come."
It reads, "The enterprise under this contract shall be directed and supervised by Americans who shall employ Saudi nationals as far as practicable, and in so far as the company can find suitable Saudi employees it will not employ other nationals." The first company school was started in May 1940 in the Al-Khobar home of Hijji bin Jassim, company interpreter, translator and first instructor. Al-Naimi pointed out, "From the beginning, the development of Aramco was directly tied to the betterment of Saudi Arabia." Another school was located in Dhahran in 1941, and was called the Jebel School. Boys hired into entry-level positions attended at 7 AM for four hours, followed by four hours of work in the afternoon. In 1950, Aramco built schools for 2,400 students. In 1959, Aramco sent the first group of Saudi students to college in the States. In 1970, Aramco started hiring its first high school graduates, and in 1979 started offering college scholarships. In 1965, Zafer H. Husseini was named the first Saudi manager and in 1974, Faisal Al-Bassam was named the first Saudi vice president. One of the early students was Al-Naimi, who was named the first Saudi president of Aramco in November 1983. As Al-Naimi states, "The oil company committed itself to developing qualified Saudis to become fully educated and trained industry professionals." Al-Naimi acknowledged Thomas Barger's championing of Saudization, "You, of all of Aramco's leaders, had the greatest vision when you supported the training effort of Saudi Arab employees during its early days. That visionary support and effort is bearing fruit now and many executive positions are filled by Saudis because of that effort." In 1943, 1,600 Saudis were employed at Aramco, but by 1987, nearly two-thirds of Aramco's 43,500 strong workforce were Saudis. In 1988, Al-Naimi became CEO and Hisham Nazer became chairman, the first Saudis to hold those positions. The "pinnacle of Saudization" occurred when the Shaybah oil field came on line in July 1998, after a three-year effort by a team consisting of 90% Saudis. The Aramco of 2016 still maintained an expatriate workforce of about 15%, so Aramco can, in the words of Al-Naimi, "be sure it is getting access to the latest innovations and technical expertise."Saudi Aramco has emitted 59.26 billion tonnes of carbon dioxide equivalent since it began operations, accounting for 4.38% of worldwide anthropogenic CO2 emissions since 1965.In a letter sent to nine international banks reportedly hired by Aramco to assist it in arranging its US$2 trillion market debut, ten environmental groups warned about the listing causing a highly possible hindrance in the effort to reduce greenhouse gas emissions and end human rights abuses committed by the Saudi regime.On 6 November 2019, Saudi Aramco joined the World Bank's initiative to reduce gas flaring to zero by 2030. The firm reported flaring of less than 1% of its total raw gas production in the first half of 2019. Greenhouse gas emissions According to environmentalists Aramco is responsible for more than 4% of global greenhouse gas emissions since 1965. Most of this is from the use of the oil they sold, for example burning gasoline in car engines. In greenhouse gas accounting such emissions from use of a product are called "scope 3" emissions. Aramco has no plan to limit scope 3 emissions. However the government says it will have net zero carbon emissions by 2060 within the country. 
In October 2023, Saudi Aramco announced a direct air capture pilot program in partnership with Siemens Energy, to be completed in 2024.

Controversies
2007 Haradh gas pipeline explosion
On 18 November 2007, reports emerged that a natural gas pipeline explosion had taken the lives of several workers; the death toll was later determined to be 34. Aramco asserted it was a purely maintenance-related incident.

Organizational culture
On 9 December 2020, the Financial Times published an article about an engineer whose family claims that Saudi Aramco was negligent in handling his COVID-19 infection. According to his family, he had been asking the company and authorities for help for three weeks before his death, but was simply asked to keep his gloves and mask on. Saudi Aramco did not formally contact his family until approximately 14 hours after his death, refused to release his body, and allegedly erased information from his mobile phone. His grieving family had to do without financial support for almost five months, and only received $400,000 in benefits, back pay and insurance after the Financial Times had started asking questions. In the same article, five whistleblowers accused Aramco of bullying and mismanagement. One former employee expressed concerns about the company's highly dangerous failure to pressure-test valves and mains units, detailing cracks in the refinery structure and sinking of roads and foundations. The refinery runs a real risk of becoming Aramco's Piper Alpha, said another expatriate employee, who also accused Aramco of lacking a culture of challenge, facilitating ineptitude and laziness.

Treatment of workers
In March 2020, Saudi Aramco came under fire after photos of a migrant worker dressed as a large hand-sanitizer dispenser went viral on social media. People on Twitter condemned the act as "modern-day slavery," "humiliating" and "dehumanizing." According to the company, the display was organized without the approval of Aramco officials.

Strong-arming investors
In 2019, sources told the Financial Times that wealthy families of Saudi Arabia had been coerced into joining the Saudi Aramco IPO. According to analysts, Aramco could realistically reach a market capitalization of $1 to $1.5 trillion, but the Saudis wanted $2 trillion. Notably, many of those who signed up have relatives who were targets of the 2017–2019 Saudi Arabian purge. According to four sources, the Saudi government "strong-armed", "coerced" or "bullied" some of the wealthiest families in the kingdom into becoming cornerstone investors.

Lobbying and research projects
Saudi Aramco has funded almost 500 studies in the last five years on energy issues and has collaborated with the United States Department of Energy on projects to boost oil production, such as developing more efficient gasoline, enhanced oil recovery, and methods to increase the flow of oil from wells. Since 2016, Saudi Arabia has spent around $140 million on lobbying to influence public opinion and policies in the US. In an effort to keep gasoline cars competitive, Saudi Aramco is working on a device, attached to cars running on gasoline, that would trap some of the carbon dioxide they emit. Saudi Aramco has also partnered with Hyundai to develop a petroleum-based fuel for hybrid gas-electric vehicles.

External links
Official website
Aramco Services Co. website (Saudi Aramco's U.S. subsidiary)
A CNN report about the security of oil in Saudi Arabia (much of it is about Saudi Aramco's security)
Saudi Arabia's crude oil production chart (1980–2004) (data sourced from the U.S. Department of Energy)
CBS 60 Minutes (2008-12-07): "The Oil Kingdom: Part One"
CBS 60 Minutes (2008-12-07): "The Oil Kingdom: Part Two"
rice
Rice is the seed of the grass species Oryza sativa (Asian rice) or, less commonly, O. glaberrima (African rice). The name wild rice is usually used for species of the genera Zizania and Porteresia, both wild and domesticated, although the term may also be used for primitive or uncultivated varieties of Oryza. As a cereal grain, domesticated rice is the most widely consumed staple food for over half of the world's human population, particularly in Asia and Africa. It is the agricultural commodity with the third-highest worldwide production, after sugarcane and maize. Since sizable portions of sugarcane and maize crops are used for purposes other than human consumption, rice is the most important food crop with regard to human nutrition and caloric intake, providing more than one-fifth of the calories consumed worldwide by humans. There are many varieties of rice, and culinary preferences tend to vary regionally. The traditional method for cultivating rice is flooding the fields while, or after, setting the young seedlings. This simple method requires sound irrigation planning, but it reduces the growth of less robust weed and pest plants that have no submerged growth state, and deters vermin. While flooding is not mandatory for the cultivation of rice, all other methods of irrigation require higher effort in weed and pest control during growth periods and a different approach for fertilizing the soil. Rice, a monocot, is normally grown as an annual plant. Rice cultivation is well suited to countries and regions with low labor costs and high rainfall, as it is labor-intensive to cultivate and requires ample water. However, rice can be grown practically anywhere, even on a steep hill or mountain area with the use of water-controlling terrace systems. Although its parent species are native to Asia and certain parts of Africa, centuries of trade and exportation have made it commonplace in many cultures worldwide. Production and consumption of rice is estimated to have been responsible for 4% of global greenhouse gas emissions in 2010.

Biology
Description
The rice plant can grow to 1–1.8 m (3–6 ft) tall, occasionally more depending on the variety and soil fertility. It has long, slender leaves 50–100 cm (20–40 in) long and 2–2.5 cm (3⁄4–1 in) broad. The small wind-pollinated flowers are produced in a branched arching to pendulous inflorescence 30–50 cm (12–20 in) long. The edible seed is a grain (caryopsis) 5–12 mm (3⁄16–15⁄32 in) long and 2–3 mm (3⁄32–1⁄8 in) thick. Rice is a cereal belonging to the family Poaceae. As a tropical crop, it can be grown during the two distinct seasons (dry and wet) of the year provided that sufficient water is made available. It is normally grown as an annual, but in the tropics it can survive as a perennial and can produce a ratoon crop for up to 30 years.

Ecology
Rice growth and production are affected by the environment, soil properties, biotic conditions, and cultural practices. Environmental factors include rainfall and water, temperature, photoperiod, solar radiation and, in some instances, tropical storms. Soil factors include soil type and position in uplands or lowlands. Biotic factors involve weeds, insects, diseases, and crop varieties. Rice does not thrive if waterlogged, yet it can survive and grow in paddy fields which are regularly flooded. Rice can be grown in different environments, depending upon water availability.
The usual arrangement is for lowland fields to be surrounded by bunds and flooded to a depth of a few centimetres until around a week before harvest time; this requires a large amount of water. The "alternate wetting and drying" technique, flooding the fields for around a week, then draining them for a similar period, and so on, uses less water. Deepwater rice varieties tolerate flooding to a depth of over 50 centimetres for at least a month. Upland rice is grown without flooding; it is rainfed like wheat or maize.

Food
Rice is commonly consumed as food around the world. The varieties of rice are typically classified as long-, medium-, and short-grained. The grains of long-grain rice (high in amylose) tend to remain intact after cooking; medium-grain rice (high in amylopectin) becomes more sticky. Medium-grain rice is used for sweet dishes, for risotto in Italy, and many rice dishes, such as arròs negre, in Spain. Some varieties of long-grain rice that are high in amylopectin, known as Thai sticky rice, are usually steamed. A stickier short-grain rice is used for sushi; the stickiness allows rice to hold its shape when cooked. Short-grain rice is used extensively in Japan, including to accompany savoury dishes.

History of cultivation

Production and commerce
Production
In 2020, world production of paddy (unmilled) rice was 756.7 million metric tons (834.1 million short tons), led by China and India with a combined 52% of this total. Other major producers were Bangladesh, Indonesia and Vietnam. The five major producers accounted for 72% of total production, while the top fifteen producers accounted for 91% of total world production in 2017. Developing countries account for 95% of the total production. Rice is a major food staple and a mainstay for the rural population and their food security. It is mainly cultivated by small farmers in holdings of less than one hectare. Rice is also a wage commodity for workers in the cash crop or non-agricultural sectors. Rice is vital for the nutrition of much of the population in Asia, as well as in Latin America and the Caribbean and in Africa; it is central to the food security of over half the world population. Many rice-producing countries suffer significant post-harvest losses at the farm, and because of poor roads, inadequate storage technologies, inefficient supply chains and farmers' inability to bring the produce into retail markets dominated by small shopkeepers. A World Bank – FAO study claims 8% to 26% of rice is lost in developing nations, on average, every year, because of post-harvest problems and poor infrastructure. Some sources claim the post-harvest losses exceed 40%. Not only do these losses reduce food security in the world, the study claims that farmers in developing countries such as China, India and others lose approximately US$89 billion of income to preventable post-harvest farm losses, poor transport, and the lack of proper storage and retail. One study claims that if these post-harvest grain losses could be eliminated with better infrastructure and retail networks, in India alone enough food would be saved every year to feed 70 to 100 million people.

Processing
The seeds of the rice plant are first milled using a rice huller to remove the chaff (the outer husks of the grain; see rice hulls). At this point in the process, the product is called brown rice. The milling may be continued, removing the bran, i.e., the rest of the husk and the germ, thereby creating white rice.
White rice, which keeps longer, lacks some important nutrients; moreover, in a limited diet which does not supplement the rice, brown rice helps to prevent the disease beriberi. Either by hand or in a rice polisher, white rice may be buffed with glucose or talc powder (often called polished rice, though this term may also refer to white rice in general), parboiled, or processed into flour. White rice may also be enriched by adding nutrients, especially those lost during the milling process. While the cheapest method of enriching involves adding a powdered blend of nutrients that will easily wash off (in the United States, rice which has been so treated requires a label warning against rinsing), more sophisticated methods apply nutrients directly to the grain, coating the grain with a water-insoluble substance which is resistant to washing. In some countries, a popular form, parboiled rice (also known as converted rice and easy-cook rice), is subjected to a steaming or parboiling process while still a brown rice grain. The parboiling process causes a gelatinisation of the starch in the grains. The grains become less brittle, and the color of the milled grain changes from white to yellow. The rice is then dried, and can then be milled as usual or used as brown rice. Milled parboiled rice is nutritionally superior to standard milled rice, because the process causes nutrients from the outer husk (especially thiamine) to move into the endosperm, so that less is subsequently lost when the husk is polished off during milling. Parboiled rice has an additional benefit in that it does not stick to the pan during cooking, as happens when cooking regular white rice. This type of rice is eaten in parts of India and countries of West Africa that are accustomed to consuming parboiled rice. In its raw state, dried rice has an indefinite shelf life. Rice bran, called nuka in Japan, is a valuable commodity in Asia and is used for many daily needs. It is a moist, oily inner layer which is heated to produce oil. It is also used as a pickling bed in making rice bran pickles and takuan. Raw rice may be ground into flour for many uses, including making many kinds of beverages, such as amazake, horchata, rice milk, and rice wine. Rice does not contain gluten, so it is suitable for people on a gluten-free diet. Rice can be made into various types of noodles. Raw, wild, or brown rice may also be consumed by raw foodists or fruitarians if soaked and sprouted (usually a week to 30 days – gaba rice). Processed rice seeds must be boiled or steamed before eating. Boiled rice may be further fried in cooking oil or butter (known as fried rice), or beaten in a tub to make mochi. Rice is a good source of protein and a staple food in many parts of the world, but it is not a complete protein: it does not contain all of the essential amino acids in sufficient amounts for good health, and should be combined with other sources of protein, such as nuts, seeds, beans, fish, or meat. Rice, like other cereal grains, can be puffed (or popped). This process takes advantage of the grains' water content and typically involves heating grains in a special chamber. Further puffing is sometimes accomplished by processing puffed pellets in a low-pressure chamber. By the ideal gas law, either lowering the local pressure or raising the water temperature increases the volume of the trapped vapour before it escapes, producing a puffy texture. Bulk raw rice density is about 0.9 g/cm3; it decreases to less than one-tenth of that when puffed.
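A minimal sketch of the gas-law reasoning above (the mole count and temperatures are illustrative assumptions, not measured values for rice):

    # Ideal gas law, V = nRT / P: the vapour volume grows as the
    # temperature rises or the surrounding pressure drops.
    R = 8.314                      # gas constant, J/(mol*K)
    n = 0.01                       # moles of trapped water vapour (illustrative)
    def vapour_volume_litres(temp_k, pressure_pa):
        return n * R * temp_k / pressure_pa * 1_000   # m3 -> litres
    print(vapour_volume_litres(473, 101_325))  # ~0.39 L at ~200 C, 1 atm
    print(vapour_volume_litres(473, 50_000))   # ~0.79 L at half the pressure
    # Consistent with the density figures above: 0.9 g/cm3 falling below
    # 0.09 g/cm3 implies more than a tenfold increase in volume.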
Harvesting, drying and milling
Unmilled rice, known as "paddy" (padi in Indonesia and Malaysia; palay in the Philippines), is usually harvested when the grains have a moisture content of around 25%. In most Asian countries, where rice is almost entirely the product of smallholder agriculture, harvesting is carried out manually, although there is a growing interest in mechanical harvesting. Harvesting can be carried out by the farmers themselves, but is also frequently done by seasonal labor groups. Harvesting is followed by threshing, either immediately or within a day or two. Again, much threshing is still carried out by hand, but there is an increasing use of mechanical threshers. Subsequently, paddy needs to be dried to bring the moisture content down to no more than 20% for milling. A familiar sight in several Asian countries is paddy laid out to dry along roads. However, in most countries the bulk of drying of marketed paddy takes place in mills, with village-level drying being used for paddy to be consumed by farm families. Mills either sun-dry or use mechanical driers, or both. Drying has to be carried out quickly to avoid the formation of molds. Mills range from simple hullers, with a throughput of a couple of tonnes a day, that simply remove the outer husk, to enormous operations that can process 4,000 metric tons (4,400 short tons) a day and produce highly polished rice. A good mill can achieve a paddy-to-rice conversion rate of up to 72%, but smaller, inefficient mills often struggle to achieve 60%. These smaller mills often do not buy paddy and sell rice but only service farmers who want to mill their paddy for their own consumption.

Distribution
Because of the importance of rice to human nutrition and food security in Asia, domestic rice markets tend to be subject to considerable state involvement. While the private sector plays a leading role in most countries, agencies such as BULOG in Indonesia, the NFA in the Philippines, VINAFOOD in Vietnam and the Food Corporation of India are all heavily involved in purchasing paddy from farmers or rice from mills and in distributing rice to poorer people. BULOG monopolises rice imports into Indonesia, while VINAFOOD controls all exports from Vietnam.

Trade
World trade figures are much smaller than those for production, as less than 8% of rice produced is traded internationally. Developing countries are the main players in the world rice trade, accounting for 83% of exports and 85% of imports. While there are numerous importers of rice, the exporters of rice are few. Just five countries (Thailand, Vietnam, China, the United States and India, in decreasing order of exported quantities) accounted for about three-quarters of world rice exports in 2002. However, this ranking has been changing rapidly in recent years. In 2010, the three largest exporters of rice, in decreasing order of quantity exported, were Thailand, Vietnam and India. By 2012, India became the largest exporter of rice, with a 100% increase in its exports on a year-to-year basis, and Thailand slipped to third position. Together, Thailand, Vietnam and India accounted for nearly 70% of world rice exports. The primary variety exported by Thailand and Vietnam was Jasmine rice, while exports from India included the aromatic Basmati variety. China, an exporter of rice in the early 2000s, had become a net importer of rice by 2010.
According to a USDA report, the world's largest exporters of rice in 2012 were India (9.75 million metric tons (10.75 million short tons)), Vietnam (7 million metric tons (7.7 million short tons)), Thailand (6.5 million metric tons (7.2 million short tons)), Pakistan (3.75 million metric tons (4.13 million short tons)) and the United States (3.5 million metric tons (3.9 million short tons)). Major importers include Nigeria, Indonesia, Bangladesh, Saudi Arabia, Iran, Iraq, Malaysia, the Philippines, Brazil and some African and Persian Gulf countries. In common with other West African countries, Nigeria is actively promoting domestic production. However, its heavy import duties (110%) open it to smuggling.

Yield records
The average world yield for rice was 4.3 metric tons per hectare (1.9 short tons per acre) in 2010. Australian rice farms were the most productive in 2010, with a nationwide average of 10.8 metric tons per hectare (4.8 short tons per acre). Yuan Longping of China's National Hybrid Rice Research and Development Center set a world record for rice yield in 2010 at 19 metric tons per hectare (8.5 short tons per acre) on a demonstration plot. In 2011, this record was reportedly surpassed by an Indian farmer, Sumant Kumar, with 22.4 metric tons per hectare (10.0 short tons per acre) in Bihar, although this claim has been disputed by both Yuan and India's Central Rice Research Institute. These efforts employed newly developed rice breeds and the System of Rice Intensification (SRI), a recent innovation in rice farming.

Worldwide consumption
As of 2013, world food consumption of rice was 565.6 million metric tons (623.5 million short tons) of paddy equivalent (377.3 million metric tons (415.9 million short tons) of milled equivalent); the largest consumers were China, consuming 162.4 million metric tons (179.0 million short tons) of paddy equivalent (28.7% of world consumption), and India, consuming 130.4 million metric tons (143.7 million short tons) of paddy equivalent (23.1% of world consumption). Between 1961 and 2002, per capita consumption of rice increased by 40% worldwide. Rice is the most important crop in Asia. In Cambodia, for example, 90% of the total agricultural area is used for rice production. Per capita, Bangladesh ranks as the country with the highest rice consumption, followed by Laos, Cambodia, Vietnam and Indonesia. U.S. rice consumption rose sharply around the start of the 21st century, fueled in part by commercial applications such as beer production. Almost one in five adult Americans now report eating at least half a serving of white or brown rice per day.

Environmental impacts
Climate change
The worldwide production of rice accounts for more greenhouse gas emissions (GHG) in total than that of any other plant food. It was estimated in 2021 to be responsible for 30% of agricultural methane emissions and 11% of agricultural nitrous oxide emissions. Methane release is caused by long-term flooding of rice fields, which cuts the soil off from atmospheric oxygen and causes anaerobic fermentation of organic matter in the soil. A 2021 study estimated that rice contributed 2 billion tonnes of anthropogenic greenhouse gases in 2010, out of the 47 billion total. The study added up GHG emissions from the entire lifecycle, including production, transportation, and consumption, and compared the global totals of different foods.
The total for rice was half the total for beef. A 2010 study found that, as a result of rising temperatures and decreasing solar radiation during the later years of the 20th century, the rice yield growth rate had decreased in many parts of Asia, compared to what would have been observed had the temperature and solar radiation trends not occurred. The yield growth rate had fallen 10–20% at some locations. The study was based on records from 227 farms in Thailand, Vietnam, Nepal, India, China, Bangladesh, and Pakistan. The mechanism of this falling yield was not clear, but might involve increased respiration during warm nights, which expends energy without the plant being able to photosynthesize. More detailed analysis of rice yields by the International Rice Research Institute forecast a 20% reduction in yields in Asia per degree Celsius of temperature rise. Rice becomes sterile if exposed to temperatures above 35 °C (95 °F) for more than one hour during flowering, and consequently produces no grain.

Water usage
Rice requires slightly more water to produce than other grains. Rice production uses almost a third of Earth's fresh water. Water leaves rice fields through transpiration, evaporation, seepage, and percolation. It is estimated that about 2,500 litres (660 US gal) of water need to be supplied to account for all of these outflows and produce 1 kilogram (2 lb 3 oz) of rice.

Pests, weeds, and diseases
Rice pests are animals which have the potential to reduce the yield or value of the rice crop (or of rice seeds); plants that do so are described as weeds, while disease-causing microbes are described as pathogens. Rice pests include insects, nematodes, rodents, and birds. A variety of factors can contribute to pest outbreaks, including climatic factors, improper irrigation, the overuse of insecticides and high rates of nitrogen fertilizer application. Weather conditions also contribute to pest outbreaks. For example, rice gall midge and armyworm outbreaks tend to follow periods of high rainfall early in the wet season, while thrips outbreaks are associated with drought.

Pests and weeds
Major rice insect pests include the brown planthopper (BPH); several species of stemborers, including those in the genera Scirpophaga and Chilo; the rice gall midge; several species of rice bugs, notably in the genus Leptocorisa; and defoliators such as the rice leafroller, rice hispa and grasshoppers. The fall armyworm, a species of Lepidoptera, also targets and causes damage to rice crops. Rice weevils attack stored produce. Several nematode species infect rice crops, causing diseases such as ufra (Ditylenchus angustus), white tip disease (Aphelenchoides besseyi), and root knot disease (Meloidogyne graminicola). Some nematode species, such as Pratylenchus spp., are most damaging to upland rice in all parts of the world. Rice root nematode (Hirschmanniella oryzae) is a migratory endoparasite which at higher inoculum levels leads to complete destruction of a rice crop. Beyond being obligate parasites, nematodes decrease the vigor of plants and increase the plants' susceptibility to other pests and diseases. Other pests include the apple snail (Pomacea canaliculata), the panicle rice mite, rats, and the weed Echinochloa crus-galli. Rice is parasitized by the hemiparasitic eudicot weed Striga hermonthica, which is of local importance for this crop.

Diseases
Rice blast, caused by the fungus Magnaporthe grisea (syn. M. oryzae, Pyricularia oryzae), is the most significant disease affecting rice cultivation.
It and bacterial leaf streak (caused by Xanthomonas oryzae pv. oryzae) are perennially the two worst rice diseases worldwide; they are both among the worst ten diseases of all plants. A 2010 review reported cloned genes for quantitative disease resistance in plants. The plant responds to the blast pathogen by releasing jasmonic acid, which cascades into the activation of further downstream metabolic pathways that produce the defense response; this accumulates as methyl jasmonic acid. The pathogen responds by synthesizing an oxidizing enzyme which prevents this accumulation and the resulting alarm signal. OsPii-2, discovered by Fujisaki et al., 2017, is a nucleotide-binding leucine-rich repeat receptor (NB-LRR, NLR), an immunoreceptor. It includes an NOI domain (NO3-induced) which binds rice's own Exo70-F3 protein. This protein is a target of the M. oryzae effector AVR-Pii, and so the NLR can monitor for M. oryzae's attack against that target. Some rice cultivars carry resistance alleles of the OsSWEET13 gene, which produces the molecular target of the X. oryzae pv. oryzae effector PthXo2. Other major fungal and bacterial rice diseases include sheath blight (caused by Rhizoctonia solani), false smut (Ustilaginoidea virens), bacterial panicle blight (Burkholderia glumae), sheath rot (Sarocladium oryzae), and bakanae (Fusarium fujikuroi). Viral diseases also exist, such as rice ragged stunt (vector: BPH) and tungro (vector: Nephotettix spp.). Many viral diseases, especially those vectored by planthoppers and leafhoppers, are major causes of losses across the world. There is also an ascomycete fungus, Cochliobolus miyabeanus, that causes brown spot disease in rice.

Integrated pest management
Crop protection scientists are trying to develop rice pest management techniques which are sustainable; in other words, to manage crop pests in such a manner that future crop production is not threatened. Sustainable pest management is based on four principles: biodiversity, host plant resistance, landscape ecology, and hierarchies in a landscape, from biological to social. At present, rice pest management includes cultural techniques, pest-resistant rice varieties, and pesticides (including insecticides). Increasingly, there is evidence that farmers' pesticide applications are often unnecessary and can even facilitate pest outbreaks: by reducing the populations of natural enemies of rice pests, misuse of insecticides can actually lead to pest outbreaks. The International Rice Research Institute (IRRI) demonstrated in 1993 that an 87.5% reduction in pesticide use can lead to an overall drop in pest numbers. IRRI also conducted two campaigns, in 1994 and 2003 respectively, which discouraged insecticide misuse and promoted smarter pest management in Vietnam. Rice plants produce their own chemical defenses to protect themselves from pest attacks. Some synthetic chemicals, such as the herbicide 2,4-D, cause the plant to increase the production of certain defensive chemicals and thereby increase the plant's resistance to some types of pests. Conversely, other chemicals, such as the insecticide imidacloprid, can induce changes in the gene expression of the rice that cause the plant to become more susceptible to attacks by certain types of pests. 5-Alkylresorcinols are chemicals that can also be found in rice. Botanicals, so-called "natural pesticides", are used by some farmers in an attempt to control rice pests. Botanicals include extracts of leaves, or a mulch of the leaves themselves.
Some upland rice farmers in Cambodia spread chopped leaves of the bitter bush (Chromolaena odorata) over the surface of fields after planting. This practice probably helps the soil retain moisture and thereby facilitates seed germination. Farmers also claim the leaves are a natural fertilizer and help suppress weed and insect infestations. Among rice cultivars, there are differences in the responses to, and recovery from, pest damage. Many rice varieties have been selected for resistance to insect pests. Therefore, particular cultivars are recommended for areas prone to certain pest problems. The genetically based ability of a rice variety to withstand pest attacks is called resistance. Three main types of plant resistance to pests are recognized: nonpreference, antibiosis, and tolerance. Nonpreference (or antixenosis) describes host plants which insects prefer to avoid; antibiosis is where insect survival is reduced after the ingestion of host tissue; and tolerance is the capacity of a plant to produce high yield or retain high quality despite insect infestation. Over time, the use of pest-resistant rice varieties selects for pests that are able to overcome these mechanisms of resistance. When a rice variety is no longer able to resist pest infestations, resistance is said to have broken down. Rice varieties that can be widely grown for many years in the presence of pests and retain their ability to withstand the pests are said to have durable resistance. Mutants of popular rice varieties are regularly screened by plant breeders to discover new sources of durable resistance.

Ecotypes and cultivars
While most rice is bred for crop quality and productivity, there are varieties selected for characteristics such as texture, smell, and firmness. There are four major categories of rice worldwide: indica, japonica, aromatic and glutinous. The different varieties of rice are not considered interchangeable, either in food preparation or agriculture, so each major variety is effectively a separate market from the others. It is common for one variety of rice to rise in price while another one drops. Rice cultivars also fall into groups according to environmental conditions, season of planting, and season of harvest, called ecotypes. Some major groups are the Japan-type (grown in Japan); the "buly" and "tjereh" types (Indonesia); and sali (or aman, the main winter crop), ahu (also aush or ghariya, summer), and boro (spring) (Bengal and Assam). Cultivars exist that are adapted to deep flooding, and these are generally called "floating rice". The largest collection of rice cultivars is at the International Rice Research Institute in the Philippines, with over 100,000 rice accessions held in the International Rice Genebank. Rice cultivars are often classified by their grain shapes and texture. For example, much of Southeast Asia grows sticky or glutinous rice ("glutinous" here means sticky, not high in gluten). Thai Jasmine rice is long-grain and relatively less sticky, as some long-grain rice contains less amylopectin than short-grain cultivars. Japanese mochi rice and Chinese sticky rice are short-grain. Indian rice cultivars include long-grained and aromatic Basmati (ਬਾਸਮਤੀ) (grown in the North), long- and medium-grained Patna rice, and, in South India (Andhra Pradesh and Karnataka), short-grained Sona Masuri (also called Bangaru theegalu). In the state of Tamil Nadu, the most prized cultivar is ponni, which is primarily grown in the delta regions of the Kaveri River.
Kaveri is also referred to as ponni in the South, and the name reflects the geographic region where the rice is grown. In the Western Indian state of Maharashtra, a short-grain variety called Ambemohar is very popular; this rice has a characteristic fragrance of mango blossom. Aromatic rices have definite aromas and flavors; the most noted cultivars are Thai fragrant rice, Basmati, Patna rice, Vietnamese fragrant rice, and a hybrid cultivar from America sold under the trade name Texmati. Both Basmati and Texmati have a mild popcorn-like aroma and flavor. In Indonesia, there are also red and black cultivars. High-yield cultivars suitable for cultivation in Africa and other dry ecosystems, called the New Rice for Africa (NERICA) cultivars, have been developed to improve food security in West Africa.

Draft genomes for the two most common rice cultivars, indica and japonica, were published in April 2002. Rice was chosen as a model organism for the biology of grasses because of its relatively small genome (~430 megabase pairs), and it was the first crop with a complete genome sequence.

Biotechnology

High-yielding varieties

The high-yielding varieties are a group of crops created intentionally during the Green Revolution to increase global food production. This project enabled labor markets in Asia to shift away from agriculture and into industrial sectors. The first "miracle rice", IR8, was produced in 1966 at the International Rice Research Institute, which is based in the Philippines at the University of the Philippines' Los Baños site. IR8 was created through a cross between an Indonesian variety named "Peta" and a Chinese variety named "Dee Geo Woo Gen".

Scientists have identified and cloned many genes involved in the gibberellin signaling pathway, including GAI1 (Gibberellin Insensitive) and SLR1 (Slender Rice). Disruption of gibberellin signaling can lead to significantly reduced stem growth and hence a dwarf phenotype. Photosynthetic investment in the stem is reduced dramatically, as the shorter plants are inherently more stable mechanically. Assimilates are redirected to grain production, amplifying in particular the effect of chemical fertilizers on commercial yield. In the presence of nitrogen fertilizers and intensive crop management, these varieties increase their yield two to three times.

Golden rice

Golden rice is a variety genetically engineered to biosynthesize beta-carotene, a precursor of vitamin A, in the edible parts of the grain, with the aim of combating vitamin A deficiency.

Expression of human proteins

Ventria Bioscience has genetically modified rice to express lactoferrin and lysozyme, proteins usually found in breast milk, as well as human serum albumin. These proteins have antiviral, antibacterial, and antifungal effects. Rice containing these added proteins can be used as a component in oral rehydration solutions, which are used to treat diarrheal diseases, thereby shortening their duration and reducing recurrence. Such supplements may also help reverse anemia.

Flood-tolerant rice

Due to the varying levels that water can reach in regions of cultivation, flood-tolerant varieties have long been developed and used. Flooding is an issue that many rice growers face, especially in South and Southeast Asia, where flooding annually affects 20 million hectares (49 million acres). Flooding has historically led to massive losses in yields, such as in the Philippines, where in 2006 rice crops worth $65 million were lost to flooding. Standard rice varieties cannot withstand stagnant flooding for more than about a week, since prolonged submergence deprives the plant of necessities such as sunlight and gas exchange.
Drought-tolerant rice

Drought represents a significant environmental stress for rice production, with 19–23 million hectares (47–57 million acres) of rainfed rice production in South and Southeast Asia often at risk. Under drought conditions, without sufficient water to obtain the required levels of nutrients from the soil, conventional commercial rice varieties can be severely affected; for example, yield losses as high as 40% have affected some parts of India, with resulting losses of around US$800 million annually.

The International Rice Research Institute conducts research into developing drought-tolerant rice varieties, including the varieties 5411 and Sookha dhan, currently being employed by farmers in the Philippines and Nepal respectively. In addition, in 2013 the Japanese National Institute for Agrobiological Sciences led a team which successfully inserted the DEEPER ROOTING 1 (DRO1) gene, from the Philippine upland rice variety Kinandang Patong, into the popular commercial rice variety IR64, giving rise to a far deeper root system in the resulting plants. This gives the rice plant an improved ability to obtain its required nutrients in times of drought by accessing deeper layers of soil, a feature demonstrated by trials in which IR64 + DRO1 yields dropped by 10% under moderate drought conditions, compared to 60% for the unmodified IR64 variety.

Salt-tolerant rice

Soil salinity poses a major threat to rice crop productivity, particularly along low-lying coastal areas during the dry season. For example, roughly 1 million hectares (2.5 million acres) of the coastal areas of Bangladesh are affected by saline soils. These high concentrations of salt can severely affect the normal physiology of rice plants, especially during early stages of growth, and as such farmers are often forced to abandon these otherwise potentially usable areas.

Progress has been made, however, in developing rice varieties capable of tolerating such conditions; the hybrid created from the cross between the commercial rice variety IR56 and the wild rice species Oryza coarctata is one example. O. coarctata is capable of successful growth in soils with double the limit of salinity of normal varieties, but lacks the ability to produce edible rice. Developed by the International Rice Research Institute, the hybrid variety can utilise specialised leaf glands that allow for the removal of salt into the atmosphere. It was initially produced from one successful embryo out of 34,000 crosses between the two species; this was then backcrossed to IR56 with the aim of preserving the genes responsible for salt tolerance that were inherited from O. coarctata. Extensive trials were planned prior to the new variety being made available to farmers by approximately 2017–18.

Environment-friendly rice

Producing rice in paddies is harmful to the environment due to the release of methane by methanogenic bacteria, which live in the anaerobic waterlogged soil and live off nutrients released by rice roots. Researchers reported in Nature in 2015 that putting the barley gene SUSIBA2 into rice creates a shift in biomass production from root to shoot (above-ground tissue becomes larger, while below-ground tissue is reduced), decreasing the methanogen population and resulting in a reduction of methane emissions of up to 97%. Apart from this environmental benefit, the modification also increases the amount of rice grain by 43%, which makes it a useful tool for feeding a growing world population.
Model organism

Rice is used as a model organism for investigating the molecular mechanisms of meiosis and DNA repair in higher plants. Meiosis is a key stage of the sexual cycle in which diploid cells in the ovule (female structure) and the anther (male structure) produce haploid cells that develop further into gametophytes and gametes. So far, 28 meiotic genes of rice have been characterized. Studies of the rice gene OsRAD51C showed that this gene is necessary for homologous recombinational repair of DNA, particularly the accurate repair of DNA double-strand breaks during meiosis. The rice gene OsDMC1 was found to be essential for pairing of homologous chromosomes during meiosis, and the rice gene OsMRE11 was found to be required for both synapsis of homologous chromosomes and repair of double-strand breaks during meiosis.

In human culture

Rice plays an important role in certain religions and popular beliefs. In many cultures, relatives scatter rice over the bride and groom in a wedding ceremony. In Malay weddings, rice features in multiple special wedding foods such as "sweet glutinous rice, buttered rice, [and] yellow glutinous rice". The pounded rice ritual is conducted during weddings in Nepal: the bride gives a leafplate full of pounded rice to the groom after he requests it politely from her. In the Philippines, rice wine, popularly known as tapuy, is used for important occasions such as weddings, rice harvesting ceremonies and other celebrations.

Dewi Sri is the traditional rice goddess of the Javanese, Sundanese, and Balinese people in Indonesia. Most rituals involving Dewi Sri are associated with the mythical origin attributed to the rice plant, the staple food of the region.

A 2014 study of Han Chinese communities found that a history of farming rice makes cultures more psychologically interdependent, whereas a history of farming wheat makes cultures more independent.

A Royal Ploughing Ceremony is held in certain Asian countries to mark the beginning of the rice planting season. It is still honored in the kingdoms of Cambodia and Thailand. The 2,600-year-old tradition, begun by Śuddhodana in Kapilavastu, was revived in the republic of Nepal in 2017 after a lapse of a few years. The Thai kings have patronised rice breeding since at least the reign of Chulalongkorn, and his great-great-grandson Vajiralongkorn released five particular rice varieties to celebrate his coronation.

See also

References

Further reading

Liu, Wende; Liu, Jinling; Triplett, Lindsay; Leach, Jan E.; Wang, Guo-Liang (August 4, 2014). "Novel insights into rice innate immunity against bacterial and fungal pathogens". Annual Review of Phytopathology. 52 (1): 213–241. doi:10.1146/annurev-phyto-102313-045926. PMID 24906128.
Deb, D. (October 2019). "Restoring Rice Biodiversity". Scientific American. 321 (4): 54–61. India originally possessed some 110,000 landraces of rice with diverse and valuable properties, including enrichment in vital nutrients and the ability to withstand flood, drought, salinity or pest infestations. The Green Revolution covered fields with a few high-yielding varieties, so that roughly 90 percent of the landraces vanished from farmers' collections. High-yielding varieties require expensive inputs and perform abysmally on marginal farms or in adverse environmental conditions, forcing poor farmers into debt.
Singh, B.N. (2018). Global Rice Cultivation & Cultivars. New Delhi: Studium Press. ISBN 978-1-62699-107-1. Archived from the original on March 14, 2018.
Retrieved March 14, 2018.
methane leak
A methane leak is a significant escape of natural gas from an industrial facility or pipeline; the term is used for a class of methane emissions. Satellite data enables the identification of super-emitter events that produce methane plumes. Over 1,000 methane leaks of this type were found worldwide in 2022. As with other gas leaks, a leak of methane is a safety hazard: coalbed methane in the form of fugitive gas emission has always been a danger to miners. Methane leaks also have a serious environmental impact. Natural gas can contain some ethane and other gases, but from both the safety and the environmental point of view the methane content is the major factor. As a greenhouse gas and climate change contributor, methane ranks second, following carbon dioxide. Fossil fuel exploration, production and transportation are responsible for about 40% of human-caused methane emissions. Leaks too small to be spotted from space make up a long tail of emissions; they can be identified from planes flying at 900 meters (3,000 ft). According to Fatih Birol of the International Energy Agency, "Methane emissions are still far too high, especially as methane cuts are among the cheapest options to limit near-term global warming".

Examples of methane leaks

Individual methane leaks as reported are specific events, with a large quantity of gas released. An example followed the 2022 Nord Stream pipeline sabotage. Following early reports that the escape might exceed 10^5 tonnes, the International Methane Emissions Observatory of the United Nations Environment Programme analysed the release, and in February 2023 put the mass of methane gas in the range 7.5 to 23.0 × 10^4 tonnes (75,000 to 230,000 t). In terms of overall human-made methane emissions, these figures are under 0.1% of the annual total.

Satellite detection has shown that methane super-emitter sites in Turkmenistan, the United States and Russia are responsible for the largest number of events at fossil fuel facilities. Equipment failures are normally responsible for the releases, which can last for weeks.

The Aliso Canyon gas leak of 2015 has been quantified as at least 1.09 × 10^5 tonnes of methane. Satellite data for the Raspadskaya coal mine, Kemerovo Oblast, Russia indicated in 2022 an hourly methane leakage rate of 87 tonnes; this compares to 60 tonnes per hour of natural gas leaking from the Aliso Canyon incident, considered among the worst recorded leak events.

Spain's Technical University of Valencia, in a study published in 2022, found that a super-emitter event at a gas and oil platform in the Gulf of Mexico released around 4 × 10^4 tonnes of methane during a 17-day period in December 2021 (an hourly rate of around 98 tonnes). Another major event in 2022 was a leak of 427 tonnes an hour in August, near Turkmenistan's Caspian coast and a major pipeline.

Units

Quantitative reports of methane leaks often use the standard cubic foot (scf) of the United States customary system. Because natural gas is a complex mixture of uncertain proportions, and because the standard volume depends on pressure and temperature conventions, calculations converting scf to metric units of mass are of limited accuracy.
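As a rough illustration, the sketch below converts a reported gas volume into an approximate methane mass and an average leak rate, using the conversion figure quoted in the next paragraph (5 × 10^4 scf as about 1.20 t, i.e. roughly 0.024 kg per scf). The worked numbers reproduce the Gulf of Mexico event above from a back-calculated, purely illustrative total volume.

```python
# Order-of-magnitude conversion between gas volume (scf) and methane mass.
# The factor assumes 5 x 10^4 scf ~ 1.20 tonnes (about 0.024 kg per scf);
# real conversions vary with gas composition, pressure and temperature.

KG_PER_SCF = 1.20e3 / 5.0e4  # = 0.024 kg per standard cubic foot

def scf_to_tonnes(volume_scf: float) -> float:
    """Convert a gas volume in standard cubic feet to metric tonnes."""
    return volume_scf * KG_PER_SCF / 1000.0

def average_rate_tonnes_per_hour(volume_scf: float, days: float) -> float:
    """Average leak rate in tonnes per hour over the leak's duration."""
    return scf_to_tonnes(volume_scf) / (days * 24.0)

if __name__ == "__main__":
    # Gulf of Mexico example: ~4 x 10^4 t over 17 days corresponds to
    # roughly 1.7 x 10^9 scf at the assumed conversion factor.
    volume = 1.7e9  # scf (back-calculated, illustrative value)
    print(f"total: {scf_to_tonnes(volume):,.0f} t")                       # ~40,800 t
    print(f"rate:  {average_rate_tonnes_per_hour(volume, 17):.0f} t/h")   # ~100 t/h
```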
A conversion figure given is 5 × 10^4 scf of natural gas as 1.32 short tons (1.20 t). For detection sensitivity, quantitative criteria are typically stated in units of standard cubic feet per hour (scf/h, "skiff", in the US) or thousand standard cubic feet per day (Mscf/d); or, in metric units, kilograms per hour (kg/h) or cubic meters per day (m3/d).

To describe the mass balance of methane in the atmosphere, mass rates are given in Tg/yr, i.e. teragrams per year, where a teragram is 10^6 tonnes (a tonne being one megagram). The methane leak from the Permian Basin, a significant region of the Mid-Continent Oil Producing Area, was estimated for 2018/19 from satellite data as 2.7 Tg/yr; quoted as a proportion of the mass of extracted gas, the leakage comes to 3.7%. The 2021 Carbon Mapper project, a collaboration of the Jet Propulsion Laboratory and academia, detected 533 methane super-emitters in the Permian Basin.

== References ==
vehicle emission standard
Emission standards are the legal requirements governing air pollutants released into the atmosphere. Emission standards set quantitative limits on the permissible amount of specific air pollutants that may be released from specific sources over specific timeframes. They are generally designed to achieve air quality standards and to protect human life. Different regions and countries have different standards for vehicle emissions.

Regulated sources

Many emissions standards focus on regulating pollutants released by automobiles (motor cars) and other powered vehicles. Others regulate emissions from industry, power plants, small equipment such as lawn mowers and diesel generators, and other sources of air pollution.

The first automobile emissions standards were enacted in 1963 in the United States, mainly as a response to Los Angeles' smog problems. Three years later Japan enacted its first emissions rules, followed between 1970 and 1972 by Canada, Australia, and several European nations. The early standards mainly concerned carbon monoxide (CO) and hydrocarbons (HC). Regulations on nitrogen oxide emissions (NOx) were introduced in the United States, Japan, and Canada in 1973 and 1974, with Sweden following in 1976 and the European Economic Community in 1977. These standards gradually grew more stringent but have never been unified.

There are largely three main sets of standards: United States, Japanese, and European, with various markets mostly using these as their base. Sweden, Switzerland, and Australia had separate emissions standards for many years but have since adopted the European standards. India, China, and other newer markets have also begun enforcing vehicle emissions standards (derived from the European requirements) in the twenty-first century, as growing vehicle fleets have given rise to severe air quality problems there, too.

Vehicle emission performance standard

An emission performance standard is a limit that sets thresholds above which a different type of vehicle emissions control technology might be needed. While emission performance standards have been used to dictate limits for conventional pollutants such as oxides of nitrogen and oxides of sulphur (NOx and SOx), this regulatory technique may also be used to regulate greenhouse gases, particularly carbon dioxide (CO2). In the US, such limits are given in pounds of carbon dioxide per megawatt-hour (lb CO2/MWh), and in kilograms of CO2 per megawatt-hour (kg CO2/MWh) elsewhere.

Europe

Before the European Union began streamlining emissions standards, there were several different sets of rules. Members of the European Economic Community (EEC) had a unified set of rules, considerably laxer than those of the United States or Japan. These were tightened gradually, beginning with cars of over two liters displacement, as the price increase would have less of an impact in this segment. The ECE 15/05 norms (also known as the Luxembourg accord, strict enough to essentially require catalytic converters) took effect gradually: the initial step applied to cars of over 2000 cc in two stages, in October 1988 and October 1989. There followed cars between 1.4 and 2.0 liters, in October 1991 and then October 1993. Cars of under 1400 cc had to meet two subsequent sets of regulations, applying from October 1992 and October 1994 respectively. French and Italian car manufacturers, strongly represented in the small car category, had been lobbying heavily against these regulations throughout the 1980s.

Within the EEC, Germany was a leader in regulating automobile emissions.
Germany gave financial incentives to buyers of cars that met US or ECE standards, with lesser credits available for those that partially fulfilled the requirements. These incentives had a strong impact: only 6.5 percent of new cars registered in Germany in 1988 did not meet any emissions requirements, and 67.3 percent were compliant with the strictest US or ECE standards.

Sweden was one of the first countries to institute stricter rules (for 1975), placing severe limitations on the number of vehicles available there. These standards also caused drivability problems and steeply increased fuel consumption, in part because manufacturers could not justify the expenditure to meet specific regulations that applied only in one very small market. In 1982, the European Community calculated that the Swedish standards increased fuel consumption by 9 percent, while making cars 2.5 percent more expensive. For 1983 Switzerland (and then Australia) adopted the same set of regulations, which gradually increased the number of certified engines. One problem with the strict standards was that they did not account for catalyzed engines, meaning that vehicles thus equipped had to have their catalytic converters removed before they could be legally registered. In 1985 the first catalyzed cars entered certain European markets such as Germany. At first, the availability of unleaded petrol was limited and sales were small. In Sweden, catalyzed vehicles were permitted from 1987, benefiting from a tax rebate to boost sales. By 1989 the Swiss/Swedish emissions rules had been tightened to the point that non-catalyzed cars could no longer be sold. In early 1989 the BMW Z1 was introduced, available only with catalyzed engines. This was a problem in some places like Portugal, where unleaded fuel was still almost non-existent, although European standards required unleaded gasoline to be "available" in every country by 1 October 1989.

European Union

The main source of greenhouse gas emissions in the European Union is transportation. In 2019, it contributed about 31% of global emissions and 24% of emissions in the EU. In addition, up to the COVID-19 pandemic, emissions in the transport sector had only increased. In 2019, about 95% of the fuel came from fossil sources.

The European Union has its own set of emissions standards that all new vehicles must meet. Currently, standards are set for all road vehicles, trains, barges and "nonroad mobile machinery" (such as tractors). No standards apply to seagoing ships or airplanes. EU Regulation No 443/2009 set an average CO2 emissions target for new passenger cars of 130 grams per kilometre, phased in gradually between 2012 and 2015; a target of 95 grams per kilometre applies from 2021. For light commercial vehicles, an emissions target of 175 g/km applies from 2017, and 147 g/km from 2020, a reduction of 16%. The EU introduced Euro 4 effective 1 January 2008, Euro 5 effective 1 January 2010, and Euro 6 effective 1 January 2014. These dates had been postponed for two years to give oil refineries the opportunity to modernize their plants. From January 2022, all new light vehicles must comply with Euro 6d. From 1 January 2023, all new motorcycles must comply with Euro 5.

Germany

According to the German federal automotive office, as of January 2009, 37.3% (15.4 million) of the cars in Germany (out of a total car population of 41.3 million) conformed to the Euro 4 standard.

Russia

Since January 2016, all new light vehicles must comply with Euro 5.
Since 2018, all new heavy vehicles must also comply with Euro 5.

UK

Several local authorities in the UK have introduced Euro 4 or Euro 5 emissions standards for taxis and licensed private hire vehicles to operate in their area. Emissions tests on diesel cars have not been carried out during MOTs in Northern Ireland for 12 years, despite being legally required. From January 2022, all new light vehicles must comply with Euro 6d. From 1 January 2023, all new motorcycles must comply with Euro 5.

North America

Canada

In Canada, the Canadian Environmental Protection Act, 1999 (CEPA 1999) transferred the legislative authority for regulating emissions from on-road vehicles and engines from Transport Canada's Motor Vehicle Safety Act to Environment Canada. The regulations align emission standards with the U.S. federal standards and apply to light-duty vehicles (e.g., passenger cars), light-duty trucks (e.g., vans, pickup trucks, sport utility vehicles), heavy-duty vehicles (e.g., trucks and buses), heavy-duty engines and motorcycles.

Mexico

From 1 July 2019, all new heavy vehicles must comply with EPA 07 and Euro 5. From 1 January 2025, all new heavy vehicles must comply with EPA 10 and Euro 6.

United States

The United States has its own set of emissions standards that all new vehicles must meet, managed by the Environmental Protection Agency (EPA). In 2014, the EPA published its "Tier 3" standards for cars, trucks and other motor vehicles, which tightened air pollution emission requirements and lowered the sulfur content of gasoline. The EPA has separate regulations for small engines, such as groundskeeping equipment. The states must also promulgate miscellaneous emissions regulations in order to comply with the National Ambient Air Quality Standards. In December 2021 the EPA issued new greenhouse gas standards for passenger cars and light trucks, effective for the 2023 vehicle model year.

State-level standards

Under federal law, the state of California is allowed to promulgate more stringent vehicle emissions standards (subject to EPA approval), and other states may choose to follow either the national or the California standards. California had produced air quality standards prior to the EPA, prompted by severe air quality problems in the Los Angeles metropolitan area. LA is the country's second-largest city by population, and relies much more heavily on automobiles and has less favorable meteorological conditions than the largest and third-largest cities (New York and Chicago).

Some states have areas that require emissions testing while other areas of the same state do not. Arizona's emissions testing locations are primarily in the two largest metropolitan areas (Phoenix and Tucson); people outside these areas are not required to submit their vehicles for testing, as these are the only areas that have failed the state's air quality tests.

California's emissions standards are set by the California Air Resources Board (CARB). By mid-2009, 16 other states had adopted CARB rules; given the size of the California market plus these other states, many manufacturers choose to build to the CARB standard when selling in all 50 states. CARB's policies have also influenced EU emissions standards.

California is attempting to regulate greenhouse gas emissions from automobiles, but faces a court challenge from the federal government.
The states are also attempting to compel the federal EPA to regulate greenhouse gas emissions, which as of 2007 it had declined to do. On 19 May 2009, news reports indicated that the federal EPA would largely adopt California's standards on greenhouse gas emissions.

California and several other western states have passed bills requiring performance-based regulation of greenhouse gases from electricity generation. In an effort to decrease emissions from heavy-duty diesel engines faster, CARB's Carl Moyer Program funds upgrades that are in advance of regulations.

The California ARB standard for light vehicle emissions is a regulation of equipment first, with verification of emissions second. The owner of the vehicle is not permitted to modify, improve, or innovate solutions in order to pass a true emissions-only standard set for their vehicle driven on public highways. California's attempt at regulation of emissions is therefore a regulation of equipment, not of air quality: vehicle owners may not modify their vehicles in any way that has not been extensively researched and approved by CARB if they are to operate them on public highways.

Latin America

Argentina

From 1 January 2016, all new heavy vehicles in Argentina must comply with Euro 5. From 1 January 2018, all new light and heavy vehicles in Argentina must comply with Euro 5.

Brazil

From 1 January 2012, all new heavy vehicles in Brazil must comply with Proconve P7 (similar to Euro 5). From 1 January 2015, all new light vehicles in Brazil must comply with Proconve L6 (similar to Euro 5). From 1 January 2022, all new light vehicles in Brazil must comply with Proconve L7 (similar to Euro 6). From 1 January 2023, all new heavy vehicles in Brazil must comply with Proconve P8 (similar to Euro 6). From 1 January 2025, new light vehicle fleets in Brazil must comply with the first stage of Proconve L8 (an automaker average).

Chile

From September 2014, all new cars in Chile must comply with Euro 5. From September 2022, all new light and medium vehicle models in Chile must comply with Euro 6b. From September 2024, all new light and medium vehicle models in Chile must comply with Euro 6c.

Colombia

From 1 January 2023, all new vehicles in Colombia must comply with Euro 6.

Asia

China

Due to rapidly expanding wealth and prosperity, the number of coal power plants and cars on China's roads is rapidly growing, creating an ongoing pollution problem. China enacted its first emissions controls on automobiles in 2000, equivalent to Euro I standards. China's State Environmental Protection Administration (SEPA) upgraded emission controls again on 1 July 2004, to the Euro II standard. A more stringent emission standard, National Standard III, equivalent to Euro III, went into effect on 1 July 2007. Plans were for Euro IV standards to take effect in 2010; Beijing introduced the Euro IV standard in advance on 1 January 2008, becoming the first city in mainland China to adopt it. From 1 January 2018, all new vehicles must comply with China 5 (similar to Euro 5). From 1 January 2021, all new vehicles in China must comply with China 6a (similar to Euro 6). From 1 July 2023, all new vehicles in China must comply with China 6b (stricter than Euro 6).

Hong Kong

From 1 January 2006, all new passenger cars with spark-ignition engines in Hong Kong must meet either the Euro IV petrol standard, the Japanese Heisei 17 standard or the US EPA Tier 2 Bin 5 standard.
New passenger cars with compression-ignition engines must meet the US EPA Tier 2 Bin 5 standard. The current standard is Euro 6c, which has been phased in since 2019.

India

Bharat stage emission standards are emission standards instituted by the Government of India to regulate the output of air pollutants from internal combustion engine equipment, including motor vehicles. The standards and the timeline for implementation are set by the Central Pollution Control Board under the Ministry of Environment & Forests. The standards, based on European regulations, were first introduced in 2000, and progressively stringent norms have been rolled out since then. All new vehicles manufactured after the implementation of the norms have to be compliant with the regulations. By 2014, the country was under a combination of Euro 3 and Euro 4-based norms, with Euro 4 standards partly implemented in 13 major cities. From April 2017, the entire country came under BS IV norms, which are based on Euro 4. Manufacturing and registration of BS VI vehicles has since begun, and from April 2020 all new vehicles manufactured must comply with BS VI.

Palestine

Since January 2012, vehicles which do not comply with Euro 6 emission values are not allowed to be imported into Palestine.

Japan

Background

On 10 June 1968, the Japanese Government passed the Air Pollution Control Act, which regulated all sources of air pollutants. Dispute-resolution provisions followed under the 1970 Air Pollution Dispute Resolution Act. As a result of these laws, the first installment of four sets of new emissions standards was introduced in 1973. Interim standards were introduced on 1 January 1975, and again for 1976; the final set of standards was introduced for 1978. The standards were not made immediately mandatory; instead, tax breaks were offered for cars which passed them. They were based on those of the original US Clean Air Act of 1970, but the test cycle included more slow city driving to correctly reflect the Japanese situation. The 1978 limits for mean emissions during a "Hot Start Test" were 2.1 grams per kilometre (3.38 g/mi) of CO, 0.25 grams per kilometre (0.40 g/mi) of HC, and 0.25 grams per kilometre (0.40 g/mi) of NOx; the maximum limits were 2.7 grams per kilometre (4.35 g/mi) of CO, 0.39 grams per kilometre (0.63 g/mi) of HC, and 0.48 grams per kilometre (0.77 g/mi) of NOx. One interesting detail of the Japanese emissions standards is that they were introduced in a soft manner: 1978 model year cars could be sold that did not meet the 1978 standards, but they suffered various tax penalties. This gave manufacturers breathing room to properly engineer solutions and also incentivized fixing the best-selling models first, leading to smoother adoption of clean air standards and fewer drivability concerns than in many other markets. Individual fuel economy ratings and observed emissions were determined using the specific testing regime of the "10–15 Mode Hot Cycle" test.

In 1992, to cope with NOx pollution problems from existing vehicle fleets in highly populated metropolitan areas, the Ministry of the Environment adopted the Law Concerning Special Measures to Reduce the Total Amount of Nitrogen Oxides Emitted from Motor Vehicles in Specified Areas, called in short the Motor Vehicle NOx Law.
The regulation designated a total of 196 communities in the Tokyo, Saitama, Kanagawa, Osaka and Hyogo Prefectures as areas with significant air pollution due to nitrogen oxides emitted from motor vehicles. Under the Law, several measures had to be taken to control NOx from in-use vehicles, including enforcing emission standards for specified vehicle categories. The regulation was amended in June 2001 to tighten the existing NOx requirements and to add PM control provisions. The amended rule is called the "Law Concerning Special Measures to Reduce the Total Amount of Nitrogen Oxides and Particulate Matter Emitted from Motor Vehicles in Specified Areas", or in short the Automotive NOx and PM Law.

Emission standards

The NOx and PM Law introduces emission standards for specified categories of in-use highway vehicles, including commercial goods (cargo) vehicles such as trucks and vans, buses, and special purpose motor vehicles, irrespective of the fuel type. The regulation also applies to diesel-powered passenger cars (but not to gasoline cars). In-use vehicles in the specified categories must meet the 1997/98 emission standards for the respective new vehicle type (in the case of heavy-duty engines, NOx = 4.5 g/kWh, PM = 0.25 g/kWh). In other words, the 1997/98 new vehicle standards are retroactively applied to older vehicles already on the road. Vehicle owners have two methods to comply:

- Replace old vehicles with newer, cleaner models
- Retrofit old vehicles with approved NOx and PM control devices

Vehicles have a grace period, between 8 and 12 years from the initial registration, to comply. The grace period depends on the vehicle type, as follows:

- Light commercial vehicles (GVW ≤ 2500 kg): 8 years
- Heavy commercial vehicles (GVW > 2500 kg): 9 years
- Micro buses (11–29 seats): 10 years
- Large buses (≥ 30 seats): 12 years
- Special vehicles (based on a cargo truck or bus): 10 years
- Diesel passenger cars: 9 years

Furthermore, the regulation allows fulfillment of its requirements to be postponed by an additional 0.5–2.5 years, depending on the age of the vehicle. This delay was introduced in part to harmonize the NOx and PM Law with the Tokyo diesel retrofit program. The NOx and PM Law is enforced in connection with the Japanese vehicle inspection program: non-complying vehicles cannot undergo the inspection in the designated areas, which in turn may trigger an injunction on the vehicle's operation under the Road Transport Vehicle Law.

Turkey

Diesel and gasoline sulphur content is regulated at 10 ppm. Turkey currently follows Euro VI for heavy-duty commercial vehicles and, in 2016, a couple of years after the EU, adopted Euro 6 for new types of light-duty vehicles (LDVs) and new types of passenger cars. Turkey is planning to use the worldwide harmonized light vehicles test procedure (WLTP). However, despite these tailpipe emission standards for new vehicle types, there are many older diesel vehicles, no low-emission zones and no national limit on PM2.5 particulates, so local pollution, including from older vehicles, is still a major health risk in some cities, such as Ankara. Concentrations of PM2.5 are 41 µg/m3 in Turkey, making it the country with the worst air pollution in Europe. The regulation for testing the exhaust gases of existing vehicles is Official Gazette number 30004, published 11 March 2017. An average of 135 g CO2/km for LDVs compared well with other countries in 2015; however, unlike the EU, there is no limit on carbon dioxide emissions.
Vietnam

From 1 January 2022, all new cars in Vietnam must comply with Euro 5.

Africa

Morocco

From 1 January 2024, all new vehicles in Morocco must comply with Euro 6b.

South Africa

South Africa's first clean fuels programme was implemented in 2006 with the banning of lead from petrol and the reduction of sulphur levels in diesel from 3,000 parts per million (ppm) to 500 ppm, along with a niche grade of 50 ppm. The Clean Fuels 2 standard, expected to begin in 2017, includes the reduction of sulphur to 10 ppm; the lowering of benzene from 5 percent to 1 percent of volume; the reduction of aromatics from 50 percent to 35 percent of volume; and the specification of olefins at 18 percent of volume.

Oceania

Australia

Australian noxious emission standards are based on European regulations for light-duty and heavy-duty (heavy goods) vehicles, with acceptance of selected US and Japanese standards. The current policy is to fully harmonize Australian regulations with United Nations (UN) and Economic Commission for Europe (ECE) standards. In November 2013, the first stage of the stringent Euro 5 emission standards for light vehicles was introduced, covering cars and light commercial vehicles. The development of emission standards for highway vehicles and engines is coordinated by the National Transport Commission (NTC), and the regulations, the Australian Design Rules (ADR), are administered by the Department of Infrastructure and Transport. All new vehicles manufactured or sold in the country must comply with the standards, which are tested by running the vehicle or engine in a standardized test cycle.

In April 2023, the Australian government released its National Electric Vehicle Strategy, which included a commitment to introduce a Fuel Efficiency Standard to address greenhouse gas emissions. The government undertook consultation on the model for the standard in April and May 2023, intending to introduce legislation by the end of 2023. Research commissioned by the environmental NGO Solar Citizens has calculated that a Fuel Efficiency Standard starting at 95 g CO2/km and reducing to 0 g CO2/km over ten years would save Australian motorists $11 billion over the first five years.

See also

Air pollution
C. Arden Pope
Carbon dioxide equivalent
The Center for Clean Air Policy (in the US)
Emission factor
Emission test cycle
Emissions trading
Environmental standard
European emission standards
Driving cycle
Flexible-fuel vehicle
Fuel efficiency
Mobile emission reduction credit
Motor vehicle emissions
National Emissions Standards for Hazardous Air Pollutants
Ultra-low-sulfur diesel
Vehicle emissions control

References

External links

Dieselnet pages on vehicle emission standards.
EPA National Vehicle and Fuel Emissions Laboratory.
"Emission Standards Reference Guide" (PDF). 19 August 2015. (141 KB) For heavy-duty and nonroad engines.
Federal Income Tax Credits for Hybrids placed in service.
EPA: History of Reducing Air Pollution from Transportation in the United States

EU

"Directive 1999/94/EC of the European Parliament and of the Council of 13 December 1999, relating to the availability of consumer information on fuel economy and CO2 emissions in respect of the marketing of new passenger cars" (PDF). (140 KB).
Council Directive 80/1268/EEC Fuel consumption of motor vehicles.
insect farming
Insect farming is the practice of raising and breeding insects as livestock, also referred to as minilivestock or micro stock. Insects may be farmed for the commodities they produce (like silk, honey, lac or insect tea), or for the insects themselves: to be used as food, as feed, as a dye, and otherwise.

Farming of popular insects

Silkworms

Silkworms, the caterpillars of the domestic silkmoth, are kept to produce silk, an elastic fiber they spin while creating a cocoon. Silk is commonly regarded as a major cash crop and is used in the crafting of many textiles.

Mealworms

The mealworm (Tenebrio molitor L.) is the larval form of a species of darkling beetle (Coleoptera). The optimum incubation temperature is 25–27 °C, and embryonic development lasts 4–6 days. The larval period is long, about half a year under optimum temperature and low-moisture conditions. The protein content of Tenebrio molitor larvae, adults, exuviae and excreta is 46.44%, 63.34%, 32.87% and 18.51% respectively.

Buffaloworms

Buffaloworm, also called the lesser mealworm, is the common name of Alphitobius diaperinus. Its larvae superficially resemble small wireworms or true mealworms (Tenebrio spp.) and are approximately 7 to 11 mm in length at the last instar. Freshly emerged larvae are a milky color; the pale tinge of the first and second instars returns when a larva is preparing to molt, while a yellowish-brown appearance follows molting. In addition, the buffaloworm has been reported to have the highest level of iron bioavailability.

Honeybees

Commodities harvested from honeybees include beeswax, bee bread, bee pollen, propolis, royal jelly, brood, and honey. All of these are mostly used in food; however, beeswax has many other uses, such as in candles, and propolis may be used as a wood finish. The presence of honeybees can, though, negatively affect the abundance and diversity of wild bees, with consequences for the pollination of crops.

Lac insects

Lac insects secrete a resinous substance called lac. Lac is used in many applications, from food to colorants and wood finishes. The majority of lac farming takes place in India and Thailand, where it employs over 2 million people.

Cochineal

Cochineal insects are made into a red dye known as carmine, which is incorporated into many products, including cosmetics, food, paint, and fabric. About 100,000 insects are needed to make a single kilogram of dye. The shade of red the dye yields depends on how the insect is processed. France is the world's largest importer of carmine.

Crickets

Among the hundreds of different types of crickets, the house cricket (Acheta domesticus) is the most common type used for human consumption. The cricket is one of the most nutritious edible insects, and in many parts of the world crickets are consumed dry-roasted, baked, deep-fried, and boiled. Cricket consumption may take the form of cricket flour, a powder of dried and ground crickets, which is easily integrated into many food recipes. Crickets are also commonly farmed as food for other animals, as they provide much nutrition to the many species of reptiles, fish, birds and mammals that consume them. Crickets are normally killed by deep freezing.

Waxworms

Waxworms are the larvae of wax moths. These caterpillars are used widely across the world for food, fish bait, animal testing and plastic degradation. Low in protein but high in fat content, they are a valuable source of fat for many insectivorous organisms.
Waxworms are popular in many parts of the world due to their ability to live at low temperatures and their simplicity of production.

Cockroaches

Cockroaches are farmed by the million in China, where they are used in traditional medicine and in cosmetics. The main species farmed is the American cockroach (Periplaneta americana). The cockroaches are reared on food such as potato and pumpkin peeling waste from restaurants, then scooped or vacuumed from their nests, killed in boiling water and dried in the sun.

As feed and food

Insects show promise as animal feed. For instance, fly larvae can replace fish meal due to their similar amino acid composition, and insect meal can be formulated to increase its unsaturated fatty acid content. Wild birds and free-range poultry naturally consume insects in the adult, larval and pupal forms. Grasshoppers and moths, as well as houseflies, have been used as feed supplements for poultry. Beyond that, insects have potential as feed for reptiles, fish and mammals as well as birds.

Hundreds of species of crickets, grasshoppers, beetles, moths and various other insects are considered edible, and selected species are farmed for human consumption. Humans have been eating insects for as long as 30,000 years, according to some sources. Today insects are becoming increasingly viable as a source of sustainably produced protein, as conventional meat production is very land-intensive and produces large quantities of methane, a greenhouse gas. Insects bred in captivity offer a low space-intensive, highly feed-efficient, relatively pollution-free, high-protein source of food for both humans and non-human animals. Insects have a high nutritional value, dense protein content, and micronutrient and probiotic potential. Insects such as crickets and mealworms have high concentrations of complete protein, vitamin B12, riboflavin and vitamin A. Insects offer an economical solution to increasingly pressing food security and environmental issues concerning the production and distribution of protein to feed a growing world population.

Benefits

Purported benefits of the use of insects as food include:

- Significantly lower resource and space use, less waste produced, and only trace emissions of greenhouse gases.
- Many vitamins and essential minerals, dietary fiber (which is not present in meat), and complete protein. The protein content of 100 g of cricket is nearly equivalent to that of 100 g of lean ground beef.
- Lower costs to raise and produce than meat.
- Faster growth and reproduction rates. Crickets mature quickly and are typically full-grown within three weeks to a month, and an individual female can lay 1,200 to 1,500 eggs in three to four weeks. Cattle, by contrast, become adults at two years, and the breeding ratio is four breeding animals for each market animal produced.
- Unlike meat, insects rarely transmit diseases such as H1N1, mad cow disease, or salmonella.

Reduced feed

Cattle use 12 times the amount of feed that crickets do to produce an equal amount of protein. Crickets also use only a quarter of the feed of sheep, and half the feed given to swine and chickens, to produce an equivalent amount of protein. Crickets require only two pounds of feed to produce one pound of finished product, as the back-of-envelope comparison below illustrates.
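A minimal sketch of that comparison, scaling the other livestock from the cricket baseline using the multipliers quoted in this section; the 2.1 kg-per-kg cricket figure appears under "Nutrient efficiency" below, and the derived per-animal numbers are illustrative estimates, not measured feed conversion ratios:

```python
# Relative feed requirements per kilogram of product, scaling other
# livestock from the cricket baseline using the multipliers quoted in
# this section (cattle 12x, sheep 4x, swine and chicken 2x). The 2.1
# kg-feed-per-kg baseline for crickets is the figure cited later in
# this article; the derived values are illustrative, not measurements.

CRICKET_FEED_KG_PER_KG = 2.1

FEED_MULTIPLIERS = {
    "crickets": 1.0,
    "swine": 2.0,
    "chicken": 2.0,
    "sheep": 4.0,
    "cattle": 12.0,
}

for animal, multiplier in sorted(FEED_MULTIPLIERS.items(), key=lambda kv: kv[1]):
    feed_kg = CRICKET_FEED_KG_PER_KG * multiplier
    print(f"{animal:>8}: ~{feed_kg:4.1f} kg feed per kg of product")
```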
Much of this efficiency results from crickets being ectothermic: they take their heat from the environment instead of expending energy to create their own body heat as mammals do.

Nutrient efficiency

Insects are nutrient-efficient compared to other meat sources. Insect protein content is comparable to that of most meat products. Likewise, the fatty acid composition of edible insects is comparable to fish lipids, with high levels of polyunsaturated fatty acids (PUFAs). In addition, all parts of an edible insect are used, whereas some parts of conventional livestock are not directly available for human consumption. The nutritional content of insects varies between and within species, depending on their metamorphic stage, habitat, and diet; for instance, the lipid composition of insects is largely dependent on their diet and metamorphic stage.

Insects are also rich in other nutrients. Locusts, for example, contain between 8 and 20 mg of iron in every 100 grams of raw locust, while beef contains roughly 6 mg of iron in the same amount of meat. Crickets are also very nutrient-efficient: per 100 grams, crickets contain 12.9 grams of protein, 121 calories, and 5.5 grams of fat. Beef contains more protein, with 23.5 grams per 100 grams, but also roughly three times the calories and four times the fat of crickets. Per 100 grams, crickets therefore provide about half the protein of beef, but far fewer calories and much less fat; the exception is iron, of which insects provide more. High levels of iron are implicated in bowel cancer and heart disease. In terms of the protein transition, cold-blooded insects convert feed more efficiently: crickets need only 2.1 kg of feed per kilogram of "meat", while poultry and cattle need more than 2 times and 12 times as much feed, respectively.

Greenhouse gas emissions

The raising of livestock is responsible for 18% of all greenhouse gases emitted. Alternative sources of protein, such as insects, can replace protein sourced from livestock and help decrease the quantity of greenhouse gases emitted by food production. Insects produce less carbon dioxide, ammonia and methane than livestock such as pigs and cattle, with no farmed insect species besides cockroaches releasing methane at all.

Land usage

Livestock raising accounts for 70% of agricultural land use. This results in land-cover change that destroys local ecosystems and displaces people and wildlife. Insect farming is minimally space-intensive compared to other conventional livestock, and can even take place in populated urban centers.

Processing methods

Taking animal health and welfare concerns (such as pain tolerance) into account, insect processing mainly comprises harvesting and cleaning, inactivation, heating, and drying, depending on the final product and the rearing method.

Harvesting and cleaning

Insects at different life stages can be collected by sieving, followed by water cleaning when residual biomass or excreta must be removed. Before processing, the insects are sieved and stored alive at 4 °C for about one day without any feed.

Inactivation

An inactivation step is needed to inactivate enzymes and microbes on the insects. The enzymatic browning reaction (mainly by phenolase, or phenol oxidase) can turn the insect brown or black, leading to discoloration and an off-flavor.

Heat treatment

Sufficient heat treatment is required to kill Enterobacteriaceae so that the product can meet safety requirements.
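One common way to quantify such a treatment is through the D-value (the time for a tenfold microbial reduction at a given temperature) and the Z-value (the temperature rise that cuts the D-value tenfold). A minimal sketch of how they combine into a lethality estimate, with purely illustrative parameter values (not measured data for any insect product):

```python
# Thermal lethality estimate from a D-value and Z-value.
# D_REF_MIN: minutes for a tenfold (1 log) microbial reduction at T_REF_C.
# Z_VALUE_C: temperature rise that cuts the D-value tenfold.
# All parameter values here are assumed for illustration only.

D_REF_MIN = 5.0   # minutes at the reference temperature (assumed)
T_REF_C = 70.0    # reference temperature in deg C (assumed)
Z_VALUE_C = 8.0   # deg C per tenfold change in D-value (assumed)

def d_value_at(temperature_c: float) -> float:
    """D-value (minutes per log reduction) at a process temperature."""
    return D_REF_MIN * 10 ** ((T_REF_C - temperature_c) / Z_VALUE_C)

def log_reductions(temperature_c: float, minutes: float) -> float:
    """Decimal (log10) reductions achieved by an isothermal hold."""
    return minutes / d_value_at(temperature_c)

# Example: holding at 78 deg C for 3 minutes gives D = 0.5 min,
# i.e. a 6-log reduction under these assumed parameters.
print(f"D at 78 C: {d_value_at(78.0):.2f} min")
print(f"log10 reductions: {log_reductions(78.0, 3.0):.1f}")
```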
These D- and Z-values can be used to estimate the effectiveness of heat treatments. The temperature and duration of heating also cause denaturation of insect proteins and change their functional properties.

Drying

To prevent spoilage, the products are dried to lower the moisture content and prolong shelf life. Drying is slow because the chitin layer, which protects the living insect against dehydration, lowers the evaporation rate; reducing the product to granule form therefore aids further drying. In general, insects have a moisture level in the range of 55–65%; a drying process that decreases the moisture content to below 10% is good for preservation. Besides the moisture level, oxidation of the lipids is a concern because of their high levels of unsaturated fatty acids, so processing steps that influence the final fat stability of the product need to be considered during drying.

Regulations in Europe

The use of insect meal as feed and food is limited by legislation. Insects can be authorized as novel foods under the European Union's guidelines for market authorization of products. The European Commission approved the use of insects in fish feed in July 2017. However, promoting the scale-up of insect production is difficult while few participants are in the market to push for changes to the rules. In Europe, safety documents for certain insects and accompanying products are required by the European Food Safety Authority (EFSA) and the Dutch Food and Consumer Product Safety Authority (NVWA).

Footnotes

References

Humanity Needs to Start Farming Bugs, Popular Science
Six-legged livestock: Edible insect farming, collection and marketing in Thailand, FAO
Maybe It's Time To Swap Burgers for Bugs, NPR
Bug farmer working to introduce insects to European diets, PRI
Edible Insect Farming, FAO
Eating insects: Sudden popularity
Apartment Bug Farm Is A Big Business, Modern Farmer
U.N. Urges Eating Insects, National Geographic
Insect Food Emissions, The Guardian
One Green Planet insect farming research & edible insect species list
Professional Insect Rearing: Strategical Points and Management Method, Books on Demand, ISBN 9782322042777, November 2015
TedxTalks: Recipes for the future

See also

Entomophagy
Insects as food
Entomophagy in humans
Butterfly ranching in Papua New Guinea
Insect Farming and Trading Agency
Welfare of farmed insects
Cricket flour
Maggot farming